00:00:00.001 Started by upstream project "autotest-per-patch" build number 126169 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.113 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/freebsd-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.113 The recommended git tool is: git 00:00:00.113 using credential 00000000-0000-0000-0000-000000000002 00:00:00.115 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/freebsd-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.178 Fetching changes from the remote Git repository 00:00:00.180 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.239 Using shallow fetch with depth 1 00:00:00.239 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.239 > git --version # timeout=10 00:00:00.290 > git --version # 'git version 2.39.2' 00:00:00.290 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.320 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.320 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.552 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.564 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.576 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:05.576 > git config core.sparsecheckout # timeout=10 00:00:05.588 > git read-tree -mu HEAD # timeout=10 00:00:05.604 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:05.627 Commit message: "inventory: add WCP3 to free inventory" 00:00:05.627 > git rev-list --no-walk b0ebb039b16703d64cc7534b6e0fa0780ed1e683 # timeout=10 00:00:05.742 [Pipeline] Start of Pipeline 00:00:05.756 [Pipeline] library 00:00:05.757 Loading library shm_lib@master 00:00:05.757 Library shm_lib@master is cached. Copying from home. 
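The checkout above is the Jenkins SCM plugin's standard shallow-clone sequence: resolve the workspace repo, fetch only the tip of master with depth 1, then force-checkout the pinned revision. A minimal sketch of the equivalent manual commands, assuming anonymous read access to the Gerrit mirror (the CI itself authenticates via GIT_ASKPASS and the proxy-dmz.intel.com proxy, which are omitted here):

    # Shallow-fetch master and pin the exact revision recorded in the log.
    git init jbp && cd jbp
    git remote add origin https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    git fetch --tags --force --progress --depth=1 origin refs/heads/master
    git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d   # the FETCH_HEAD tip at fetch time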
00:00:05.774 [Pipeline] node
00:00:20.776 Still waiting to schedule task
00:00:20.776 ‘CYP11’ doesn’t have label ‘vagrant-vm-host’
00:00:20.776 ‘CYP13’ doesn’t have label ‘vagrant-vm-host’
00:00:20.776 ‘CYP7’ doesn’t have label ‘vagrant-vm-host’
00:00:20.776 ‘CYP8’ doesn’t have label ‘vagrant-vm-host’
00:00:20.776 ‘FCP03’ doesn’t have label ‘vagrant-vm-host’
00:00:20.776 ‘FCP04’ doesn’t have label ‘vagrant-vm-host’
00:00:20.776 ‘FCP07’ doesn’t have label ‘vagrant-vm-host’
00:00:20.776 ‘FCP08’ doesn’t have label ‘vagrant-vm-host’
00:00:20.776 ‘FCP09’ doesn’t have label ‘vagrant-vm-host’
00:00:20.776 ‘FCP10’ doesn’t have label ‘vagrant-vm-host’
00:00:20.776 ‘FCP11’ doesn’t have label ‘vagrant-vm-host’
00:00:20.776 ‘FCP12’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘GP10’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘GP13’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘GP14’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘GP15’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘GP16’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘GP18’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘GP19’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘GP20’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘GP21’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘GP22’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘GP3’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘GP4’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘GP5’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘Jenkins’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘ME1’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘ME2’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘ME3’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘PE5’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘SM10’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘SM13’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘SM1’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘SM25’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘SM26’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘SM27’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘SM28’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘SM29’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘SM2’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘SM30’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘SM31’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘SM32’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘SM33’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘SM34’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘SM35’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘SM5’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘SM6’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘SM7’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘SM8’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘VM-host-PE1’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘VM-host-PE2’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘VM-host-PE3’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘VM-host-PE4’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘VM-host-SM18’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘VM-host-WFP1’ is offline
00:00:20.777 ‘VM-host-WFP25’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘WCP0’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘WCP2’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘WCP5’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘WFP11’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘WFP15’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘WFP17’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘WFP28’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘WFP2’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘WFP31’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘WFP32’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘WFP33’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘WFP34’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘WFP35’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘WFP36’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘WFP37’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘WFP38’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘WFP47’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘WFP49’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘WFP63’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘WFP65’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘WFP66’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘WFP67’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘WFP68’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘WFP69’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘WFP9’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘ipxe-staging’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘prc_bsc_waikikibeach64’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘spdk-pxe-01’ doesn’t have label ‘vagrant-vm-host’
00:00:20.777 ‘spdk-pxe-02’ doesn’t have label ‘vagrant-vm-host’
00:11:56.123 Running on VM-host-WFP1 in /var/jenkins/workspace/freebsd-vg-autotest
00:11:56.125 [Pipeline] {
00:11:56.137 [Pipeline] catchError
00:11:56.138 [Pipeline] {
00:11:56.151 [Pipeline] wrap
00:11:56.161 [Pipeline] {
00:11:56.167 [Pipeline] stage
00:11:56.169 [Pipeline] { (Prologue)
00:11:56.185 [Pipeline] echo
00:11:56.186 Node: VM-host-WFP1
00:11:56.191 [Pipeline] cleanWs
00:11:56.211 [WS-CLEANUP] Deleting project workspace...
00:11:56.211 [WS-CLEANUP] Deferred wipeout is used...
00:11:56.218 [WS-CLEANUP] done 00:11:56.407 [Pipeline] setCustomBuildProperty 00:11:56.469 [Pipeline] httpRequest 00:11:56.508 [Pipeline] echo 00:11:56.510 Sorcerer 10.211.164.101 is alive 00:11:56.517 [Pipeline] httpRequest 00:11:56.522 HttpMethod: GET 00:11:56.522 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:11:56.524 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:11:56.527 Response Code: HTTP/1.1 200 OK 00:11:56.528 Success: Status code 200 is in the accepted range: 200,404 00:11:56.528 Saving response body to /var/jenkins/workspace/freebsd-vg-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:11:57.421 [Pipeline] sh 00:11:57.717 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:11:57.734 [Pipeline] httpRequest 00:11:57.751 [Pipeline] echo 00:11:57.753 Sorcerer 10.211.164.101 is alive 00:11:57.762 [Pipeline] httpRequest 00:11:57.767 HttpMethod: GET 00:11:57.768 URL: http://10.211.164.101/packages/spdk_62a72093c08fd8c16f60a79961fc65ceca1d8765.tar.gz 00:11:57.769 Sending request to url: http://10.211.164.101/packages/spdk_62a72093c08fd8c16f60a79961fc65ceca1d8765.tar.gz 00:11:57.772 Response Code: HTTP/1.1 200 OK 00:11:57.772 Success: Status code 200 is in the accepted range: 200,404 00:11:57.773 Saving response body to /var/jenkins/workspace/freebsd-vg-autotest/spdk_62a72093c08fd8c16f60a79961fc65ceca1d8765.tar.gz 00:12:14.045 [Pipeline] sh 00:12:14.361 + tar --no-same-owner -xf spdk_62a72093c08fd8c16f60a79961fc65ceca1d8765.tar.gz 00:12:16.984 [Pipeline] sh 00:12:17.274 + git -C spdk log --oneline -n5 00:12:17.274 62a72093c bdev: Add bdev_enable_histogram filter 00:12:17.274 719d03c6a sock/uring: only register net impl if supported 00:12:17.274 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:12:17.274 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:12:17.274 6c7c1f57e accel: add sequence outstanding stat 00:12:17.298 [Pipeline] writeFile 00:12:17.319 [Pipeline] sh 00:12:17.606 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:12:17.618 [Pipeline] sh 00:12:17.982 + cat autorun-spdk.conf 00:12:17.983 SPDK_TEST_UNITTEST=1 00:12:17.983 SPDK_RUN_VALGRIND=0 00:12:17.983 SPDK_RUN_FUNCTIONAL_TEST=1 00:12:17.983 SPDK_TEST_NVME=1 00:12:17.983 SPDK_TEST_BLOCKDEV=1 00:12:17.983 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:12:17.993 RUN_NIGHTLY=0 00:12:17.996 [Pipeline] } 00:12:18.015 [Pipeline] // stage 00:12:18.033 [Pipeline] stage 00:12:18.036 [Pipeline] { (Run VM) 00:12:18.053 [Pipeline] sh 00:12:18.350 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:12:18.350 + echo 'Start stage prepare_nvme.sh' 00:12:18.350 Start stage prepare_nvme.sh 00:12:18.350 + [[ -n 3 ]] 00:12:18.350 + disk_prefix=ex3 00:12:18.350 + [[ -n /var/jenkins/workspace/freebsd-vg-autotest ]] 00:12:18.350 + [[ -e /var/jenkins/workspace/freebsd-vg-autotest/autorun-spdk.conf ]] 00:12:18.350 + source /var/jenkins/workspace/freebsd-vg-autotest/autorun-spdk.conf 00:12:18.350 ++ SPDK_TEST_UNITTEST=1 00:12:18.350 ++ SPDK_RUN_VALGRIND=0 00:12:18.350 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:12:18.350 ++ SPDK_TEST_NVME=1 00:12:18.350 ++ SPDK_TEST_BLOCKDEV=1 00:12:18.350 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:12:18.350 ++ RUN_NIGHTLY=0 00:12:18.350 + cd /var/jenkins/workspace/freebsd-vg-autotest 00:12:18.350 + nvme_files=() 00:12:18.350 + declare -A nvme_files 00:12:18.350 + backend_dir=/var/lib/libvirt/images/backends 
00:12:18.350 + nvme_files['nvme.img']=5G 00:12:18.350 + nvme_files['nvme-cmb.img']=5G 00:12:18.350 + nvme_files['nvme-multi0.img']=4G 00:12:18.350 + nvme_files['nvme-multi1.img']=4G 00:12:18.350 + nvme_files['nvme-multi2.img']=4G 00:12:18.350 + nvme_files['nvme-openstack.img']=8G 00:12:18.350 + nvme_files['nvme-zns.img']=5G 00:12:18.350 + (( SPDK_TEST_NVME_PMR == 1 )) 00:12:18.350 + (( SPDK_TEST_FTL == 1 )) 00:12:18.350 + (( SPDK_TEST_NVME_FDP == 1 )) 00:12:18.350 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:12:18.350 + for nvme in "${!nvme_files[@]}" 00:12:18.350 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:12:18.350 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:12:18.350 + for nvme in "${!nvme_files[@]}" 00:12:18.350 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:12:18.351 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:12:18.351 + for nvme in "${!nvme_files[@]}" 00:12:18.351 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:12:18.351 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:12:18.351 + for nvme in "${!nvme_files[@]}" 00:12:18.351 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:12:18.351 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:12:18.351 + for nvme in "${!nvme_files[@]}" 00:12:18.351 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:12:18.351 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:12:18.351 + for nvme in "${!nvme_files[@]}" 00:12:18.351 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:12:18.610 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:12:18.610 + for nvme in "${!nvme_files[@]}" 00:12:18.610 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:12:18.610 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:12:18.610 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:12:18.610 + echo 'End stage prepare_nvme.sh' 00:12:18.610 End stage prepare_nvme.sh 00:12:18.624 [Pipeline] sh 00:12:18.914 + DISTRO=freebsd14 CPUS=10 RAM=14336 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:12:18.914 Setup: -n 10 -s 14336 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme.img -H -a -v -f freebsd14 00:12:18.914 00:12:18.914 DIR=/var/jenkins/workspace/freebsd-vg-autotest/spdk/scripts/vagrant 00:12:18.914 SPDK_DIR=/var/jenkins/workspace/freebsd-vg-autotest/spdk 00:12:18.914 VAGRANT_TARGET=/var/jenkins/workspace/freebsd-vg-autotest 00:12:18.914 HELP=0 00:12:18.914 DRY_RUN=0 00:12:18.914 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img, 00:12:18.914 NVME_DISKS_TYPE=nvme, 00:12:18.914 NVME_AUTO_CREATE=0 00:12:18.914 NVME_DISKS_NAMESPACES=, 
00:12:18.914 NVME_CMB=, 00:12:18.914 NVME_PMR=, 00:12:18.914 NVME_ZNS=, 00:12:18.914 NVME_MS=, 00:12:18.914 NVME_FDP=, 00:12:18.914 SPDK_VAGRANT_DISTRO=freebsd14 00:12:18.914 SPDK_VAGRANT_VMCPU=10 00:12:18.914 SPDK_VAGRANT_VMRAM=14336 00:12:18.914 SPDK_VAGRANT_PROVIDER=libvirt 00:12:18.914 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:12:18.914 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:12:18.914 SPDK_OPENSTACK_NETWORK=0 00:12:18.914 VAGRANT_PACKAGE_BOX=0 00:12:18.914 VAGRANTFILE=/var/jenkins/workspace/freebsd-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:12:18.914 FORCE_DISTRO=true 00:12:18.914 VAGRANT_BOX_VERSION= 00:12:18.914 EXTRA_VAGRANTFILES= 00:12:18.914 NIC_MODEL=e1000 00:12:18.914 00:12:18.914 mkdir: created directory '/var/jenkins/workspace/freebsd-vg-autotest/freebsd14-libvirt' 00:12:18.914 /var/jenkins/workspace/freebsd-vg-autotest/freebsd14-libvirt /var/jenkins/workspace/freebsd-vg-autotest 00:12:21.456 Bringing machine 'default' up with 'libvirt' provider... 00:12:22.840 ==> default: Creating image (snapshot of base box volume). 00:12:23.101 ==> default: Creating domain with the following settings... 00:12:23.101 ==> default: -- Name: freebsd14-14.0-RELEASE-1718332871-2294_default_1721036324_0aaa1e6b2ef09697f2df 00:12:23.101 ==> default: -- Domain type: kvm 00:12:23.101 ==> default: -- Cpus: 10 00:12:23.101 ==> default: -- Feature: acpi 00:12:23.101 ==> default: -- Feature: apic 00:12:23.101 ==> default: -- Feature: pae 00:12:23.101 ==> default: -- Memory: 14336M 00:12:23.101 ==> default: -- Memory Backing: hugepages: 00:12:23.101 ==> default: -- Management MAC: 00:12:23.101 ==> default: -- Loader: 00:12:23.101 ==> default: -- Nvram: 00:12:23.101 ==> default: -- Base box: spdk/freebsd14 00:12:23.101 ==> default: -- Storage pool: default 00:12:23.101 ==> default: -- Image: /var/lib/libvirt/images/freebsd14-14.0-RELEASE-1718332871-2294_default_1721036324_0aaa1e6b2ef09697f2df.img (32G) 00:12:23.101 ==> default: -- Volume Cache: default 00:12:23.101 ==> default: -- Kernel: 00:12:23.101 ==> default: -- Initrd: 00:12:23.101 ==> default: -- Graphics Type: vnc 00:12:23.101 ==> default: -- Graphics Port: -1 00:12:23.101 ==> default: -- Graphics IP: 127.0.0.1 00:12:23.101 ==> default: -- Graphics Password: Not defined 00:12:23.101 ==> default: -- Video Type: cirrus 00:12:23.101 ==> default: -- Video VRAM: 9216 00:12:23.102 ==> default: -- Sound Type: 00:12:23.102 ==> default: -- Keymap: en-us 00:12:23.102 ==> default: -- TPM Path: 00:12:23.102 ==> default: -- INPUT: type=mouse, bus=ps2 00:12:23.102 ==> default: -- Command line args: 00:12:23.102 ==> default: -> value=-device, 00:12:23.102 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:12:23.102 ==> default: -> value=-drive, 00:12:23.102 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0, 00:12:23.102 ==> default: -> value=-device, 00:12:23.102 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:12:23.363 ==> default: Creating shared folders metadata... 00:12:23.624 ==> default: Starting domain. 00:12:26.176 ==> default: Waiting for domain to get an IP address... 00:12:58.363 ==> default: Waiting for SSH to become available... 00:13:06.488 ==> default: Configuring and enabling network interfaces... 
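For reference, the "-> value=" pairs listed under "Command line args" in the domain definition above are extra arguments that vagrant-libvirt passes through to QEMU to emulate an NVMe controller backed by ex3-nvme.img. Reassembled from the log, stripped of the Vagrant plumbing (a sketch only; the real domain is defined and launched by libvirt, and the machine/memory/network arguments are elided):

    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
        ... \
        -device nvme,id=nvme-0,serial=12340,addr=0x10 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0 \
        -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096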
00:13:13.227 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:13:28.112 ==> default: Mounting SSHFS shared folder... 00:13:29.490 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest/freebsd14-libvirt/output => /home/vagrant/spdk_repo/output 00:13:29.490 ==> default: Checking Mount.. 00:13:30.868 ==> default: Folder Successfully Mounted! 00:13:30.868 ==> default: Running provisioner: file... 00:13:32.245 default: ~/.gitconfig => .gitconfig 00:13:32.505 00:13:32.505 SUCCESS! 00:13:32.505 00:13:32.505 cd to /var/jenkins/workspace/freebsd-vg-autotest/freebsd14-libvirt and type "vagrant ssh" to use. 00:13:32.505 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:13:32.505 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/freebsd-vg-autotest/freebsd14-libvirt" to destroy all trace of vm. 00:13:32.505 00:13:32.514 [Pipeline] } 00:13:32.532 [Pipeline] // stage 00:13:32.540 [Pipeline] dir 00:13:32.541 Running in /var/jenkins/workspace/freebsd-vg-autotest/freebsd14-libvirt 00:13:32.542 [Pipeline] { 00:13:32.553 [Pipeline] catchError 00:13:32.555 [Pipeline] { 00:13:32.572 [Pipeline] sh 00:13:32.855 + vagrant ssh-config --host vagrant 00:13:32.855 + sed -ne /^Host/,$p 00:13:32.855 + tee ssh_conf 00:13:36.147 Host vagrant 00:13:36.147 HostName 192.168.121.123 00:13:36.147 User vagrant 00:13:36.147 Port 22 00:13:36.147 UserKnownHostsFile /dev/null 00:13:36.147 StrictHostKeyChecking no 00:13:36.147 PasswordAuthentication no 00:13:36.147 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-freebsd14/14.0-RELEASE-1718332871-2294/libvirt/freebsd14 00:13:36.147 IdentitiesOnly yes 00:13:36.147 LogLevel FATAL 00:13:36.147 ForwardAgent yes 00:13:36.147 ForwardX11 yes 00:13:36.147 00:13:36.163 [Pipeline] withEnv 00:13:36.166 [Pipeline] { 00:13:36.182 [Pipeline] sh 00:13:36.466 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:13:36.466 source /etc/os-release 00:13:36.466 [[ -e /image.version ]] && img=$(< /image.version) 00:13:36.466 # Minimal, systemd-like check. 00:13:36.466 if [[ -e /.dockerenv ]]; then 00:13:36.466 # Clear garbage from the node's name: 00:13:36.466 # agt-er_autotest_547-896 -> autotest_547-896 00:13:36.466 # $HOSTNAME is the actual container id 00:13:36.466 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:13:36.466 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:13:36.466 # We can assume this is a mount from a host where container is running, 00:13:36.466 # so fetch its hostname to easily identify the target swarm worker. 
00:13:36.466 container="$(< /etc/hostname) ($agent)" 00:13:36.466 else 00:13:36.466 # Fallback 00:13:36.466 container=$agent 00:13:36.466 fi 00:13:36.466 fi 00:13:36.466 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:13:36.466 00:13:36.479 [Pipeline] } 00:13:36.502 [Pipeline] // withEnv 00:13:36.511 [Pipeline] setCustomBuildProperty 00:13:36.528 [Pipeline] stage 00:13:36.530 [Pipeline] { (Tests) 00:13:36.554 [Pipeline] sh 00:13:36.840 + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:13:37.114 [Pipeline] sh 00:13:37.398 + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:13:37.673 [Pipeline] timeout 00:13:37.674 Timeout set to expire in 1 hr 30 min 00:13:37.676 [Pipeline] { 00:13:37.695 [Pipeline] sh 00:13:37.980 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:13:38.549 HEAD is now at 62a72093c bdev: Add bdev_enable_histogram filter 00:13:38.562 [Pipeline] sh 00:13:38.845 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:13:39.118 [Pipeline] sh 00:13:39.401 + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:13:39.418 [Pipeline] sh 00:13:39.701 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant CXX=/usr/bin/clang++ CC=/usr/bin/clang JOB_BASE_NAME=freebsd-vg-autotest ./autoruner.sh spdk_repo 00:13:39.960 ++ readlink -f spdk_repo 00:13:39.960 + DIR_ROOT=/home/vagrant/spdk_repo 00:13:39.960 + [[ -n /home/vagrant/spdk_repo ]] 00:13:39.960 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:13:39.960 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:13:39.960 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:13:39.960 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:13:39.960 + [[ -d /home/vagrant/spdk_repo/output ]] 00:13:39.960 + [[ freebsd-vg-autotest == pkgdep-* ]] 00:13:39.960 + cd /home/vagrant/spdk_repo 00:13:39.960 + source /etc/os-release 00:13:39.960 ++ NAME=FreeBSD 00:13:39.960 ++ VERSION=14.0-RELEASE 00:13:39.960 ++ VERSION_ID=14.0 00:13:39.960 ++ ID=freebsd 00:13:39.960 ++ ANSI_COLOR='0;31' 00:13:39.960 ++ PRETTY_NAME='FreeBSD 14.0-RELEASE' 00:13:39.960 ++ CPE_NAME=cpe:/o:freebsd:freebsd:14.0 00:13:39.960 ++ HOME_URL=https://FreeBSD.org/ 00:13:39.960 ++ BUG_REPORT_URL=https://bugs.FreeBSD.org/ 00:13:39.960 + uname -a 00:13:39.960 FreeBSD freebsd-cloud-1718332871-2294.local 14.0-RELEASE FreeBSD 14.0-RELEASE #0 releng/14.0-n265380-f9716eee8ab4: Fri Nov 10 05:57:23 UTC 2023 root@releng1.nyi.freebsd.org:/usr/obj/usr/src/amd64.amd64/sys/GENERIC amd64 00:13:39.960 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:13:39.960 Contigmem (not present) 00:13:39.960 Buffer Size: not set 00:13:39.960 Num Buffers: not set 00:13:39.960 00:13:39.960 00:13:39.960 Type BDF Vendor Device Driver 00:13:39.960 NVMe 0:0:16:0 0x1b36 0x0010 nvme0 00:13:39.960 + rm -f /tmp/spdk-ld-path 00:13:39.960 + source autorun-spdk.conf 00:13:39.960 ++ SPDK_TEST_UNITTEST=1 00:13:39.960 ++ SPDK_RUN_VALGRIND=0 00:13:39.960 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:13:39.960 ++ SPDK_TEST_NVME=1 00:13:39.960 ++ SPDK_TEST_BLOCKDEV=1 00:13:39.960 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:13:39.960 ++ RUN_NIGHTLY=0 00:13:39.960 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:13:39.960 + [[ -n '' ]] 00:13:39.960 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:13:39.960 + for M in /var/spdk/build-*-manifest.txt 00:13:39.960 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:13:39.960 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:13:39.960 + for M in /var/spdk/build-*-manifest.txt 00:13:39.960 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:13:39.960 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:13:39.960 ++ uname 00:13:39.960 + [[ FreeBSD == \L\i\n\u\x ]] 00:13:39.960 + dmesg_pid=1225 00:13:39.960 + tail -F /var/log/messages 00:13:39.960 + [[ FreeBSD == FreeBSD ]] 00:13:39.960 + export LC_ALL=C LC_CTYPE=C 00:13:39.960 + LC_ALL=C 00:13:39.960 + LC_CTYPE=C 00:13:39.960 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:39.960 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:39.960 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:13:39.960 + [[ -x /usr/src/fio-static/fio ]] 00:13:39.960 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:13:39.960 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:13:39.960 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:13:39.960 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:13:39.960 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:13:39.960 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:13:39.960 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:13:39.960 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:13:39.960 Test configuration: 00:13:39.960 SPDK_TEST_UNITTEST=1 00:13:39.960 SPDK_RUN_VALGRIND=0 00:13:39.960 SPDK_RUN_FUNCTIONAL_TEST=1 00:13:39.961 SPDK_TEST_NVME=1 00:13:39.961 SPDK_TEST_BLOCKDEV=1 00:13:39.961 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:13:40.219 RUN_NIGHTLY=0 09:40:08 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:40.219 09:40:08 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:13:40.219 09:40:08 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:40.219 09:40:08 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:40.219 09:40:08 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:13:40.219 09:40:08 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:13:40.219 09:40:08 -- paths/export.sh@4 -- $ export PATH 00:13:40.219 09:40:08 -- paths/export.sh@5 -- $ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:13:40.219 09:40:08 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:13:40.219 09:40:08 -- common/autobuild_common.sh@444 -- $ date +%s 00:13:40.219 09:40:08 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721036408.XXXXXX 00:13:40.219 09:40:08 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721036408.XXXXXX.6k4tQ4LkQz 00:13:40.219 09:40:08 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:13:40.219 09:40:08 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:13:40.219 09:40:08 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:13:40.219 09:40:08 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:13:40.219 09:40:08 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:13:40.219 09:40:08 -- common/autobuild_common.sh@460 -- $ get_config_params 00:13:40.219 09:40:08 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:13:40.219 09:40:08 -- common/autotest_common.sh@10 -- $ set +x 00:13:40.478 09:40:08 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio' 00:13:40.478 09:40:08 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:13:40.478 09:40:08 -- pm/common@17 -- $ local monitor 00:13:40.478 09:40:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:13:40.478 09:40:08 -- pm/common@25 -- $ sleep 1 00:13:40.478 09:40:08 -- 
pm/common@21 -- $ date +%s 00:13:40.478 09:40:08 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721036408 00:13:40.478 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721036408_collect-vmstat.pm.log 00:13:41.410 09:40:09 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:13:41.410 09:40:09 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:13:41.410 09:40:09 -- spdk/autobuild.sh@12 -- $ umask 022 00:13:41.410 09:40:09 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:13:41.410 09:40:09 -- spdk/autobuild.sh@16 -- $ date -u 00:13:41.410 Mon Jul 15 09:40:09 UTC 2024 00:13:41.410 09:40:09 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:13:41.410 v24.09-pre-203-g62a72093c 00:13:41.410 09:40:09 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:13:41.410 09:40:09 -- spdk/autobuild.sh@23 -- $ '[' 0 -eq 1 ']' 00:13:41.410 09:40:09 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:13:41.410 09:40:09 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:13:41.410 09:40:09 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:13:41.410 09:40:09 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:13:41.410 09:40:09 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:13:41.410 09:40:09 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:13:41.410 09:40:09 -- spdk/autobuild.sh@58 -- $ unittest_build 00:13:41.410 09:40:09 -- common/autobuild_common.sh@420 -- $ run_test unittest_build _unittest_build 00:13:41.410 09:40:09 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:13:41.410 09:40:09 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:13:41.410 09:40:09 -- common/autotest_common.sh@10 -- $ set +x 00:13:41.410 ************************************ 00:13:41.410 START TEST unittest_build 00:13:41.410 ************************************ 00:13:41.410 09:40:09 unittest_build -- common/autotest_common.sh@1123 -- $ _unittest_build 00:13:41.410 09:40:09 unittest_build -- common/autobuild_common.sh@411 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --without-shared 00:13:42.341 Notice: Vhost, rte_vhost library, virtio, and fuse 00:13:42.341 are only supported on Linux. Turning off default feature. 00:13:42.341 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:13:42.341 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:13:43.278 RDMA_OPTION_ID_ACK_TIMEOUT is not supported 00:13:43.278 Using 'verbs' RDMA provider 00:13:56.104 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:14:06.080 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:14:06.080 Creating mk/config.mk...done. 00:14:06.080 Creating mk/cc.flags.mk...done. 00:14:06.080 Type 'gmake' to build. 00:14:06.080 09:40:33 unittest_build -- common/autobuild_common.sh@412 -- $ gmake -j10 00:14:06.338 gmake[1]: Nothing to be done for 'all'. 
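On FreeBSD the unittest build above boils down to the configure invocation shown in the trace followed by GNU make, which is installed as gmake. A minimal sketch of the equivalent manual steps inside the VM, using exactly the flags from the log (paths assume the vagrant layout):

    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --without-shared
    gmake -j10    # matches the -j10 parallelism used by the autobuild stage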
00:14:11.610 ps: stdin: not a terminal 00:14:16.880 The Meson build system 00:14:16.880 Version: 1.4.0 00:14:16.880 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:14:16.880 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:14:16.880 Build type: native build 00:14:16.880 Program cat found: YES (/bin/cat) 00:14:16.880 Project name: DPDK 00:14:16.880 Project version: 24.03.0 00:14:16.880 C compiler for the host machine: /usr/bin/clang (clang 16.0.6 "FreeBSD clang version 16.0.6 (https://github.com/llvm/llvm-project.git llvmorg-16.0.6-0-g7cbf1a259152)") 00:14:16.880 C linker for the host machine: /usr/bin/clang ld.lld 16.0.6 00:14:16.880 Host machine cpu family: x86_64 00:14:16.880 Host machine cpu: x86_64 00:14:16.880 Message: ## Building in Developer Mode ## 00:14:16.880 Program pkg-config found: YES (/usr/local/bin/pkg-config) 00:14:16.880 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:14:16.880 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:14:16.880 Program python3 found: YES (/usr/local/bin/python3.9) 00:14:16.880 Program cat found: YES (/bin/cat) 00:14:16.880 Compiler for C supports arguments -march=native: YES 00:14:16.880 Checking for size of "void *" : 8 00:14:16.880 Checking for size of "void *" : 8 (cached) 00:14:16.880 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:14:16.880 Library m found: YES 00:14:16.880 Library numa found: NO 00:14:16.880 Library fdt found: NO 00:14:16.880 Library execinfo found: YES 00:14:16.880 Has header "execinfo.h" : YES 00:14:16.880 Found pkg-config: YES (/usr/local/bin/pkg-config) 2.2.0 00:14:16.880 Run-time dependency libarchive found: NO (tried pkgconfig) 00:14:16.880 Run-time dependency libbsd found: NO (tried pkgconfig) 00:14:16.880 Run-time dependency jansson found: NO (tried pkgconfig) 00:14:16.880 Run-time dependency openssl found: YES 3.0.13 00:14:16.880 Run-time dependency libpcap found: NO (tried pkgconfig) 00:14:16.880 Library pcap found: YES 00:14:16.880 Has header "pcap.h" with dependency -lpcap: YES 00:14:16.880 Compiler for C supports arguments -Wcast-qual: YES 00:14:16.880 Compiler for C supports arguments -Wdeprecated: YES 00:14:16.880 Compiler for C supports arguments -Wformat: YES 00:14:16.880 Compiler for C supports arguments -Wformat-nonliteral: YES 00:14:16.880 Compiler for C supports arguments -Wformat-security: YES 00:14:16.880 Compiler for C supports arguments -Wmissing-declarations: YES 00:14:16.880 Compiler for C supports arguments -Wmissing-prototypes: YES 00:14:16.880 Compiler for C supports arguments -Wnested-externs: YES 00:14:16.880 Compiler for C supports arguments -Wold-style-definition: YES 00:14:16.880 Compiler for C supports arguments -Wpointer-arith: YES 00:14:16.881 Compiler for C supports arguments -Wsign-compare: YES 00:14:16.881 Compiler for C supports arguments -Wstrict-prototypes: YES 00:14:16.881 Compiler for C supports arguments -Wundef: YES 00:14:16.881 Compiler for C supports arguments -Wwrite-strings: YES 00:14:16.881 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:14:16.881 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:14:16.881 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:14:16.881 Compiler for C supports arguments -mavx512f: YES 00:14:16.881 Checking if "AVX512 checking" compiles: YES 00:14:16.881 Fetching value of define "__SSE4_2__" : 1 00:14:16.881 Fetching value of 
define "__AES__" : 1 00:14:16.881 Fetching value of define "__AVX__" : 1 00:14:16.881 Fetching value of define "__AVX2__" : 1 00:14:16.881 Fetching value of define "__AVX512BW__" : 1 00:14:16.881 Fetching value of define "__AVX512CD__" : 1 00:14:16.881 Fetching value of define "__AVX512DQ__" : 1 00:14:16.881 Fetching value of define "__AVX512F__" : 1 00:14:16.881 Fetching value of define "__AVX512VL__" : 1 00:14:16.881 Fetching value of define "__PCLMUL__" : 1 00:14:16.881 Fetching value of define "__RDRND__" : 1 00:14:16.881 Fetching value of define "__RDSEED__" : 1 00:14:16.881 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:14:16.881 Fetching value of define "__znver1__" : (undefined) 00:14:16.881 Fetching value of define "__znver2__" : (undefined) 00:14:16.881 Fetching value of define "__znver3__" : (undefined) 00:14:16.881 Fetching value of define "__znver4__" : (undefined) 00:14:16.881 Compiler for C supports arguments -Wno-format-truncation: NO 00:14:16.881 Message: lib/log: Defining dependency "log" 00:14:16.881 Message: lib/kvargs: Defining dependency "kvargs" 00:14:16.881 Message: lib/telemetry: Defining dependency "telemetry" 00:14:16.881 Checking if "Detect argument count for CPU_OR" compiles: YES 00:14:16.881 Checking for function "getentropy" : YES 00:14:16.881 Message: lib/eal: Defining dependency "eal" 00:14:16.881 Message: lib/ring: Defining dependency "ring" 00:14:16.881 Message: lib/rcu: Defining dependency "rcu" 00:14:16.881 Message: lib/mempool: Defining dependency "mempool" 00:14:16.881 Message: lib/mbuf: Defining dependency "mbuf" 00:14:16.881 Fetching value of define "__PCLMUL__" : 1 (cached) 00:14:16.881 Fetching value of define "__AVX512F__" : 1 (cached) 00:14:16.881 Fetching value of define "__AVX512BW__" : 1 (cached) 00:14:16.881 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:14:16.881 Fetching value of define "__AVX512VL__" : 1 (cached) 00:14:16.881 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:14:16.881 Compiler for C supports arguments -mpclmul: YES 00:14:16.881 Compiler for C supports arguments -maes: YES 00:14:16.881 Compiler for C supports arguments -mavx512f: YES (cached) 00:14:16.881 Compiler for C supports arguments -mavx512bw: YES 00:14:16.881 Compiler for C supports arguments -mavx512dq: YES 00:14:16.881 Compiler for C supports arguments -mavx512vl: YES 00:14:16.881 Compiler for C supports arguments -mvpclmulqdq: YES 00:14:16.881 Compiler for C supports arguments -mavx2: YES 00:14:16.881 Compiler for C supports arguments -mavx: YES 00:14:16.881 Message: lib/net: Defining dependency "net" 00:14:16.881 Message: lib/meter: Defining dependency "meter" 00:14:16.881 Message: lib/ethdev: Defining dependency "ethdev" 00:14:16.881 Message: lib/pci: Defining dependency "pci" 00:14:16.881 Message: lib/cmdline: Defining dependency "cmdline" 00:14:16.881 Message: lib/hash: Defining dependency "hash" 00:14:16.881 Message: lib/timer: Defining dependency "timer" 00:14:16.881 Message: lib/compressdev: Defining dependency "compressdev" 00:14:16.881 Message: lib/cryptodev: Defining dependency "cryptodev" 00:14:16.881 Message: lib/dmadev: Defining dependency "dmadev" 00:14:16.881 Compiler for C supports arguments -Wno-cast-qual: YES 00:14:16.881 Message: lib/reorder: Defining dependency "reorder" 00:14:16.881 Message: lib/security: Defining dependency "security" 00:14:16.881 Has header "linux/userfaultfd.h" : NO 00:14:16.881 Has header "linux/vduse.h" : NO 00:14:16.881 Compiler for C supports arguments -Wno-format-truncation: NO 
(cached) 00:14:16.881 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:14:16.881 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:14:16.881 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:14:16.881 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:14:16.881 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:14:16.881 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:14:16.881 Message: Disabling vdpa/* drivers: missing internal dependency "vhost" 00:14:16.881 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:14:16.881 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:14:16.881 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:14:16.881 Program doxygen found: YES (/usr/local/bin/doxygen) 00:14:16.881 Configuring doxy-api-html.conf using configuration 00:14:16.881 Configuring doxy-api-man.conf using configuration 00:14:16.881 Program mandb found: NO 00:14:16.881 Program sphinx-build found: NO 00:14:16.881 Configuring rte_build_config.h using configuration 00:14:16.881 Message: 00:14:16.881 ================= 00:14:16.881 Applications Enabled 00:14:16.881 ================= 00:14:16.881 00:14:16.881 apps: 00:14:16.881 00:14:16.881 00:14:16.881 Message: 00:14:16.881 ================= 00:14:16.881 Libraries Enabled 00:14:16.881 ================= 00:14:16.881 00:14:16.881 libs: 00:14:16.881 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:14:16.881 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:14:16.881 cryptodev, dmadev, reorder, security, 00:14:16.881 00:14:16.881 Message: 00:14:16.881 =============== 00:14:16.881 Drivers Enabled 00:14:16.881 =============== 00:14:16.881 00:14:16.881 common: 00:14:16.881 00:14:16.881 bus: 00:14:16.881 pci, vdev, 00:14:16.881 mempool: 00:14:16.881 ring, 00:14:16.881 dma: 00:14:16.881 00:14:16.881 net: 00:14:16.881 00:14:16.881 crypto: 00:14:16.881 00:14:16.881 compress: 00:14:16.881 00:14:16.881 00:14:16.881 Message: 00:14:16.881 ================= 00:14:16.881 Content Skipped 00:14:16.881 ================= 00:14:16.881 00:14:16.881 apps: 00:14:16.881 dumpcap: explicitly disabled via build config 00:14:16.881 graph: explicitly disabled via build config 00:14:16.881 pdump: explicitly disabled via build config 00:14:16.881 proc-info: explicitly disabled via build config 00:14:16.881 test-acl: explicitly disabled via build config 00:14:16.881 test-bbdev: explicitly disabled via build config 00:14:16.881 test-cmdline: explicitly disabled via build config 00:14:16.881 test-compress-perf: explicitly disabled via build config 00:14:16.881 test-crypto-perf: explicitly disabled via build config 00:14:16.881 test-dma-perf: explicitly disabled via build config 00:14:16.881 test-eventdev: explicitly disabled via build config 00:14:16.881 test-fib: explicitly disabled via build config 00:14:16.881 test-flow-perf: explicitly disabled via build config 00:14:16.881 test-gpudev: explicitly disabled via build config 00:14:16.881 test-mldev: explicitly disabled via build config 00:14:16.881 test-pipeline: explicitly disabled via build config 00:14:16.881 test-pmd: explicitly disabled via build config 00:14:16.881 test-regex: explicitly disabled via build config 00:14:16.881 test-sad: explicitly disabled via build config 00:14:16.881 test-security-perf: explicitly disabled via build config 00:14:16.881 00:14:16.881 libs: 00:14:16.881 argparse: explicitly disabled via 
build config 00:14:16.881 metrics: explicitly disabled via build config 00:14:16.881 acl: explicitly disabled via build config 00:14:16.881 bbdev: explicitly disabled via build config 00:14:16.881 bitratestats: explicitly disabled via build config 00:14:16.881 bpf: explicitly disabled via build config 00:14:16.881 cfgfile: explicitly disabled via build config 00:14:16.881 distributor: explicitly disabled via build config 00:14:16.881 efd: explicitly disabled via build config 00:14:16.881 eventdev: explicitly disabled via build config 00:14:16.881 dispatcher: explicitly disabled via build config 00:14:16.881 gpudev: explicitly disabled via build config 00:14:16.881 gro: explicitly disabled via build config 00:14:16.881 gso: explicitly disabled via build config 00:14:16.881 ip_frag: explicitly disabled via build config 00:14:16.881 jobstats: explicitly disabled via build config 00:14:16.881 latencystats: explicitly disabled via build config 00:14:16.881 lpm: explicitly disabled via build config 00:14:16.881 member: explicitly disabled via build config 00:14:16.881 pcapng: explicitly disabled via build config 00:14:16.881 power: only supported on Linux 00:14:16.881 rawdev: explicitly disabled via build config 00:14:16.881 regexdev: explicitly disabled via build config 00:14:16.881 mldev: explicitly disabled via build config 00:14:16.881 rib: explicitly disabled via build config 00:14:16.881 sched: explicitly disabled via build config 00:14:16.881 stack: explicitly disabled via build config 00:14:16.881 vhost: only supported on Linux 00:14:16.881 ipsec: explicitly disabled via build config 00:14:16.881 pdcp: explicitly disabled via build config 00:14:16.881 fib: explicitly disabled via build config 00:14:16.881 port: explicitly disabled via build config 00:14:16.881 pdump: explicitly disabled via build config 00:14:16.881 table: explicitly disabled via build config 00:14:16.881 pipeline: explicitly disabled via build config 00:14:16.881 graph: explicitly disabled via build config 00:14:16.881 node: explicitly disabled via build config 00:14:16.881 00:14:16.881 drivers: 00:14:16.881 common/cpt: not in enabled drivers build config 00:14:16.881 common/dpaax: not in enabled drivers build config 00:14:16.881 common/iavf: not in enabled drivers build config 00:14:16.881 common/idpf: not in enabled drivers build config 00:14:16.881 common/ionic: not in enabled drivers build config 00:14:16.881 common/mvep: not in enabled drivers build config 00:14:16.881 common/octeontx: not in enabled drivers build config 00:14:16.881 bus/auxiliary: not in enabled drivers build config 00:14:16.881 bus/cdx: not in enabled drivers build config 00:14:16.881 bus/dpaa: not in enabled drivers build config 00:14:16.881 bus/fslmc: not in enabled drivers build config 00:14:16.881 bus/ifpga: not in enabled drivers build config 00:14:16.881 bus/platform: not in enabled drivers build config 00:14:16.881 bus/uacce: not in enabled drivers build config 00:14:16.881 bus/vmbus: not in enabled drivers build config 00:14:16.881 common/cnxk: not in enabled drivers build config 00:14:16.881 common/mlx5: not in enabled drivers build config 00:14:16.881 common/nfp: not in enabled drivers build config 00:14:16.881 common/nitrox: not in enabled drivers build config 00:14:16.881 common/qat: not in enabled drivers build config 00:14:16.881 common/sfc_efx: not in enabled drivers build config 00:14:16.881 mempool/bucket: not in enabled drivers build config 00:14:16.881 mempool/cnxk: not in enabled drivers build config 00:14:16.881 mempool/dpaa: 
not in enabled drivers build config 00:14:16.881 mempool/dpaa2: not in enabled drivers build config 00:14:16.882 mempool/octeontx: not in enabled drivers build config 00:14:16.882 mempool/stack: not in enabled drivers build config 00:14:16.882 dma/cnxk: not in enabled drivers build config 00:14:16.882 dma/dpaa: not in enabled drivers build config 00:14:16.882 dma/dpaa2: not in enabled drivers build config 00:14:16.882 dma/hisilicon: not in enabled drivers build config 00:14:16.882 dma/idxd: not in enabled drivers build config 00:14:16.882 dma/ioat: not in enabled drivers build config 00:14:16.882 dma/skeleton: not in enabled drivers build config 00:14:16.882 net/af_packet: not in enabled drivers build config 00:14:16.882 net/af_xdp: not in enabled drivers build config 00:14:16.882 net/ark: not in enabled drivers build config 00:14:16.882 net/atlantic: not in enabled drivers build config 00:14:16.882 net/avp: not in enabled drivers build config 00:14:16.882 net/axgbe: not in enabled drivers build config 00:14:16.882 net/bnx2x: not in enabled drivers build config 00:14:16.882 net/bnxt: not in enabled drivers build config 00:14:16.882 net/bonding: not in enabled drivers build config 00:14:16.882 net/cnxk: not in enabled drivers build config 00:14:16.882 net/cpfl: not in enabled drivers build config 00:14:16.882 net/cxgbe: not in enabled drivers build config 00:14:16.882 net/dpaa: not in enabled drivers build config 00:14:16.882 net/dpaa2: not in enabled drivers build config 00:14:16.882 net/e1000: not in enabled drivers build config 00:14:16.882 net/ena: not in enabled drivers build config 00:14:16.882 net/enetc: not in enabled drivers build config 00:14:16.882 net/enetfec: not in enabled drivers build config 00:14:16.882 net/enic: not in enabled drivers build config 00:14:16.882 net/failsafe: not in enabled drivers build config 00:14:16.882 net/fm10k: not in enabled drivers build config 00:14:16.882 net/gve: not in enabled drivers build config 00:14:16.882 net/hinic: not in enabled drivers build config 00:14:16.882 net/hns3: not in enabled drivers build config 00:14:16.882 net/i40e: not in enabled drivers build config 00:14:16.882 net/iavf: not in enabled drivers build config 00:14:16.882 net/ice: not in enabled drivers build config 00:14:16.882 net/idpf: not in enabled drivers build config 00:14:16.882 net/igc: not in enabled drivers build config 00:14:16.882 net/ionic: not in enabled drivers build config 00:14:16.882 net/ipn3ke: not in enabled drivers build config 00:14:16.882 net/ixgbe: not in enabled drivers build config 00:14:16.882 net/mana: not in enabled drivers build config 00:14:16.882 net/memif: not in enabled drivers build config 00:14:16.882 net/mlx4: not in enabled drivers build config 00:14:16.882 net/mlx5: not in enabled drivers build config 00:14:16.882 net/mvneta: not in enabled drivers build config 00:14:16.882 net/mvpp2: not in enabled drivers build config 00:14:16.882 net/netvsc: not in enabled drivers build config 00:14:16.882 net/nfb: not in enabled drivers build config 00:14:16.882 net/nfp: not in enabled drivers build config 00:14:16.882 net/ngbe: not in enabled drivers build config 00:14:16.882 net/null: not in enabled drivers build config 00:14:16.882 net/octeontx: not in enabled drivers build config 00:14:16.882 net/octeon_ep: not in enabled drivers build config 00:14:16.882 net/pcap: not in enabled drivers build config 00:14:16.882 net/pfe: not in enabled drivers build config 00:14:16.882 net/qede: not in enabled drivers build config 00:14:16.882 net/ring: not in 
enabled drivers build config 00:14:16.882 net/sfc: not in enabled drivers build config 00:14:16.882 net/softnic: not in enabled drivers build config 00:14:16.882 net/tap: not in enabled drivers build config 00:14:16.882 net/thunderx: not in enabled drivers build config 00:14:16.882 net/txgbe: not in enabled drivers build config 00:14:16.882 net/vdev_netvsc: not in enabled drivers build config 00:14:16.882 net/vhost: not in enabled drivers build config 00:14:16.882 net/virtio: not in enabled drivers build config 00:14:16.882 net/vmxnet3: not in enabled drivers build config 00:14:16.882 raw/*: missing internal dependency, "rawdev" 00:14:16.882 crypto/armv8: not in enabled drivers build config 00:14:16.882 crypto/bcmfs: not in enabled drivers build config 00:14:16.882 crypto/caam_jr: not in enabled drivers build config 00:14:16.882 crypto/ccp: not in enabled drivers build config 00:14:16.882 crypto/cnxk: not in enabled drivers build config 00:14:16.882 crypto/dpaa_sec: not in enabled drivers build config 00:14:16.882 crypto/dpaa2_sec: not in enabled drivers build config 00:14:16.882 crypto/ipsec_mb: not in enabled drivers build config 00:14:16.882 crypto/mlx5: not in enabled drivers build config 00:14:16.882 crypto/mvsam: not in enabled drivers build config 00:14:16.882 crypto/nitrox: not in enabled drivers build config 00:14:16.882 crypto/null: not in enabled drivers build config 00:14:16.882 crypto/octeontx: not in enabled drivers build config 00:14:16.882 crypto/openssl: not in enabled drivers build config 00:14:16.882 crypto/scheduler: not in enabled drivers build config 00:14:16.882 crypto/uadk: not in enabled drivers build config 00:14:16.882 crypto/virtio: not in enabled drivers build config 00:14:16.882 compress/isal: not in enabled drivers build config 00:14:16.882 compress/mlx5: not in enabled drivers build config 00:14:16.882 compress/nitrox: not in enabled drivers build config 00:14:16.882 compress/octeontx: not in enabled drivers build config 00:14:16.882 compress/zlib: not in enabled drivers build config 00:14:16.882 regex/*: missing internal dependency, "regexdev" 00:14:16.882 ml/*: missing internal dependency, "mldev" 00:14:16.882 vdpa/*: missing internal dependency, "vhost" 00:14:16.882 event/*: missing internal dependency, "eventdev" 00:14:16.882 baseband/*: missing internal dependency, "bbdev" 00:14:16.882 gpu/*: missing internal dependency, "gpudev" 00:14:16.882 00:14:16.882 00:14:17.141 Build targets in project: 81 00:14:17.141 00:14:17.141 DPDK 24.03.0 00:14:17.141 00:14:17.141 User defined options 00:14:17.141 buildtype : debug 00:14:17.141 default_library : static 00:14:17.141 libdir : lib 00:14:17.141 prefix : / 00:14:17.141 c_args : -fPIC -Werror 00:14:17.141 c_link_args : 00:14:17.141 cpu_instruction_set: native 00:14:17.141 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:14:17.141 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:14:17.141 enable_docs : false 00:14:17.141 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:14:17.141 enable_kmods : true 00:14:17.141 max_lcores : 128 00:14:17.141 tests : false 00:14:17.141 00:14:17.141 Found 
ninja-1.11.1 at /usr/local/bin/ninja 00:14:17.707 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:14:17.707 [1/233] Compiling C object lib/librte_log.a.p/log_log_freebsd.c.o 00:14:17.966 [2/233] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:14:17.966 [3/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:14:17.966 [4/233] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:14:17.966 [5/233] Linking static target lib/librte_kvargs.a 00:14:17.966 [6/233] Compiling C object lib/librte_log.a.p/log_log.c.o 00:14:17.966 [7/233] Linking static target lib/librte_log.a 00:14:17.966 [8/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:14:18.225 [9/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:14:18.225 [10/233] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:14:18.484 [11/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:14:18.484 [12/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:14:18.484 [13/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:14:18.484 [14/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:14:18.484 [15/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:14:18.484 [16/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:14:18.484 [17/233] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:14:18.484 [18/233] Linking static target lib/librte_telemetry.a 00:14:18.484 [19/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:14:18.484 [20/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:14:18.743 [21/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:14:18.743 [22/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:14:18.743 [23/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:14:19.003 [24/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:14:19.003 [25/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:14:19.003 [26/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:14:19.003 [27/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:14:19.003 [28/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:14:19.003 [29/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:14:19.003 [30/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:14:19.262 [31/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:14:19.262 [32/233] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:14:19.262 [33/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:14:19.262 [34/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:14:19.262 [35/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:14:19.262 [36/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:14:19.262 [37/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:14:19.521 [38/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:14:19.521 
[39/233] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:14:19.521 [40/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:14:19.521 [41/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:14:19.521 [42/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:14:19.521 [43/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:14:19.521 [44/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:14:19.521 [45/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:14:19.780 [46/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:14:19.780 [47/233] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:14:19.780 [48/233] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:14:19.780 [49/233] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:14:20.039 [50/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:14:20.039 [51/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:14:20.039 [52/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_cpuflags.c.o 00:14:20.039 [53/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:14:20.039 [54/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:14:20.039 [55/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:14:20.039 [56/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:14:20.039 [57/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:14:20.039 [58/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal.c.o 00:14:20.298 [59/233] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:14:20.298 [60/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_alarm.c.o 00:14:20.298 [61/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:14:20.298 [62/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_dev.c.o 00:14:20.298 [63/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:14:20.298 [64/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:14:20.298 [65/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_hugepage_info.c.o 00:14:20.298 [66/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_lcore.c.o 00:14:20.557 [67/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_thread.c.o 00:14:20.557 [68/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_timer.c.o 00:14:20.557 [69/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:14:20.557 [70/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memory.c.o 00:14:20.557 [71/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:14:20.557 [72/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memalloc.c.o 00:14:20.817 [73/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_interrupts.c.o 00:14:20.817 [74/233] Linking static target lib/librte_eal.a 00:14:20.817 [75/233] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:14:20.817 [76/233] Linking static target lib/librte_rcu.a 00:14:20.817 [77/233] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:14:20.817 [78/233] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:14:20.817 [79/233] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:14:20.817 [80/233] Linking static 
target lib/librte_ring.a 00:14:21.076 [81/233] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:14:21.076 [82/233] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:14:21.076 [83/233] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:14:21.076 [84/233] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:14:21.076 [85/233] Linking static target lib/librte_mempool.a 00:14:21.076 [86/233] Linking target lib/librte_log.so.24.1 00:14:21.076 [87/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:14:21.336 [88/233] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:14:21.336 [89/233] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:14:21.336 [90/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:14:21.336 [91/233] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:14:21.336 [92/233] Linking target lib/librte_kvargs.so.24.1 00:14:21.336 [93/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:14:21.336 [94/233] Linking static target lib/net/libnet_crc_avx512_lib.a 00:14:21.336 [95/233] Linking target lib/librte_telemetry.so.24.1 00:14:21.336 [96/233] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:14:21.336 [97/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:14:21.336 [98/233] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:14:21.336 [99/233] Linking static target lib/librte_mbuf.a 00:14:21.594 [100/233] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:14:21.594 [101/233] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:14:21.594 [102/233] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:14:21.594 [103/233] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:14:21.594 [104/233] Linking static target lib/librte_meter.a 00:14:21.594 [105/233] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:14:21.594 [106/233] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:14:21.853 [107/233] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:14:21.853 [108/233] Linking static target lib/librte_net.a 00:14:21.853 [109/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:14:22.113 [110/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:14:22.113 [111/233] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:14:22.113 [112/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:14:22.371 [113/233] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:14:22.371 [114/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:14:22.631 [115/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:14:22.631 [116/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:14:22.631 [117/233] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:14:22.891 [118/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:14:22.891 [119/233] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:14:22.891 [120/233] Linking static target lib/librte_pci.a 00:14:22.891 [121/233] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:14:22.891 [122/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:14:22.891 [123/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:14:22.891 [124/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:14:23.150 [125/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:14:23.150 [126/233] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:14:23.150 [127/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:14:23.150 [128/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:14:23.150 [129/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:14:23.150 [130/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:14:23.150 [131/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:14:23.150 [132/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:14:23.150 [133/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:14:23.150 [134/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:14:23.150 [135/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:14:23.150 [136/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:14:23.150 [137/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:14:23.150 [138/233] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:14:23.409 [139/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:14:23.409 [140/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:14:23.409 [141/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:14:23.409 [142/233] Linking static target lib/librte_ethdev.a 00:14:23.669 [143/233] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:14:23.669 [144/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:14:23.669 [145/233] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:14:23.669 [146/233] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:14:23.669 [147/233] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:14:24.513 [148/233] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:14:24.513 [149/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:14:24.513 [150/233] Linking static target lib/librte_cmdline.a 00:14:24.513 [151/233] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:14:24.513 [152/233] Linking static target lib/librte_timer.a 00:14:24.513 [153/233] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:14:24.513 [154/233] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:14:24.513 [155/233] Linking static target lib/librte_compressdev.a 00:14:24.513 [156/233] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:14:24.513 [157/233] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:14:24.513 [158/233] Linking static target lib/librte_hash.a 00:14:24.513 [159/233] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:14:24.973 [160/233] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:14:24.973 [161/233] Linking static target 
lib/librte_dmadev.a 00:14:24.973 [162/233] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:14:24.973 [163/233] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:14:24.973 [164/233] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:14:24.973 [165/233] Linking static target lib/librte_cryptodev.a 00:14:24.973 [166/233] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:14:24.973 [167/233] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:14:24.973 [168/233] Linking static target lib/librte_reorder.a 00:14:24.973 [169/233] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:14:24.973 [170/233] Linking static target lib/librte_security.a 00:14:24.973 [171/233] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:14:24.973 [172/233] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:14:24.973 [173/233] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:14:25.255 [174/233] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:14:25.255 [175/233] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:14:25.255 [176/233] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:14:25.255 [177/233] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:14:25.255 [178/233] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_bsd_pci.c.o 00:14:25.255 [179/233] Linking static target drivers/libtmp_rte_bus_pci.a 00:14:25.255 [180/233] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:14:25.523 [181/233] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:14:25.523 [182/233] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:14:25.523 [183/233] Linking static target drivers/libtmp_rte_mempool_ring.a 00:14:25.781 [184/233] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:14:25.781 [185/233] Linking static target drivers/libtmp_rte_bus_vdev.a 00:14:25.781 [186/233] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:14:25.781 [187/233] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:14:25.781 [188/233] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:14:25.781 [189/233] Linking static target drivers/librte_bus_pci.a 00:14:26.039 [190/233] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:14:26.039 [191/233] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:14:26.039 [192/233] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:14:26.039 [193/233] Linking static target drivers/librte_mempool_ring.a 00:14:26.039 [194/233] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:14:26.039 [195/233] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:14:26.039 [196/233] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:14:26.039 [197/233] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:14:26.039 [198/233] Linking static target 
drivers/librte_bus_vdev.a 00:14:26.039 [199/233] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:14:26.298 [200/233] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:14:27.232 [201/233] Generating kernel/freebsd/contigmem with a custom command 00:14:27.232 machine -> /usr/src/sys/amd64/include 00:14:27.232 x86 -> /usr/src/sys/x86/include 00:14:27.232 i386 -> /usr/src/sys/i386/include 00:14:27.232 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/device_if.m -h 00:14:27.232 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/bus_if.m -h 00:14:27.232 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/dev/pci/pci_if.m -h 00:14:27.232 touch opt_global.h 00:14:27.232 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/home/vagrant/spdk_repo/spdk/dpdk/config -include /home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. -I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -fdebug-prefix-map=./i386=/usr/src/sys/i386/include -MD -MF.depend.contigmem.o -MTcontigmem.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-format-zero-length -mno-aes -mno-avx -std=gnu99 -c /home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/contigmem/contigmem.c -o contigmem.o 00:14:27.232 ld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o contigmem.ko contigmem.o 00:14:27.232 :> export_syms 00:14:27.232 awk -f /usr/src/sys/conf/kmod_syms.awk contigmem.ko export_syms | xargs -J% objcopy % contigmem.ko 00:14:27.232 objcopy --strip-debug contigmem.ko 00:14:27.489 [202/233] Generating kernel/freebsd/nic_uio with a custom command 00:14:27.490 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/home/vagrant/spdk_repo/spdk/dpdk/config -include /home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. 
-I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -fdebug-prefix-map=./i386=/usr/src/sys/i386/include -MD -MF.depend.nic_uio.o -MTnic_uio.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-format-zero-length -mno-aes -mno-avx -std=gnu99 -c /home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/nic_uio/nic_uio.c -o nic_uio.o 00:14:27.490 ld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o nic_uio.ko nic_uio.o 00:14:27.490 :> export_syms 00:14:27.490 awk -f /usr/src/sys/conf/kmod_syms.awk nic_uio.ko export_syms | xargs -J% objcopy % nic_uio.ko 00:14:27.490 objcopy --strip-debug nic_uio.ko 00:14:30.770 [203/233] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:14:33.301 [204/233] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:14:33.301 [205/233] Linking target lib/librte_eal.so.24.1 00:14:33.301 [206/233] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:14:33.301 [207/233] Linking target lib/librte_timer.so.24.1 00:14:33.301 [208/233] Linking target lib/librte_pci.so.24.1 00:14:33.301 [209/233] Linking target lib/librte_dmadev.so.24.1 00:14:33.301 [210/233] Linking target lib/librte_meter.so.24.1 00:14:33.301 [211/233] Linking target lib/librte_ring.so.24.1 00:14:33.301 [212/233] Linking target drivers/librte_bus_vdev.so.24.1 00:14:33.301 [213/233] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:14:33.301 [214/233] Linking target drivers/librte_bus_pci.so.24.1 00:14:33.559 [215/233] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:14:33.559 [216/233] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:14:33.559 [217/233] Linking target lib/librte_mempool.so.24.1 00:14:33.559 [218/233] Linking target lib/librte_rcu.so.24.1 00:14:33.559 [219/233] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:14:33.559 [220/233] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:14:33.816 [221/233] Linking target drivers/librte_mempool_ring.so.24.1 00:14:33.816 [222/233] Linking target lib/librte_mbuf.so.24.1 00:14:33.816 [223/233] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:14:33.816 [224/233] Linking target lib/librte_net.so.24.1 00:14:33.816 [225/233] Linking target lib/librte_reorder.so.24.1 00:14:33.816 [226/233] Linking target lib/librte_compressdev.so.24.1 00:14:33.816 [227/233] Linking target lib/librte_cryptodev.so.24.1 00:14:34.074 [228/233] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:14:34.074 [229/233] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:14:34.074 [230/233] Linking target lib/librte_hash.so.24.1 00:14:34.074 [231/233] 
Linking target lib/librte_cmdline.so.24.1 00:14:34.074 [232/233] Linking target lib/librte_security.so.24.1 00:14:34.074 [233/233] Linking target lib/librte_ethdev.so.24.1 00:14:34.074 INFO: autodetecting backend as ninja 00:14:34.074 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:14:35.451 CC lib/ut_mock/mock.o 00:14:35.451 CC lib/log/log.o 00:14:35.451 CC lib/log/log_flags.o 00:14:35.451 CC lib/log/log_deprecated.o 00:14:35.451 CC lib/ut/ut.o 00:14:35.451 LIB libspdk_ut_mock.a 00:14:35.451 LIB libspdk_log.a 00:14:35.451 LIB libspdk_ut.a 00:14:35.451 CC lib/dma/dma.o 00:14:35.451 CC lib/ioat/ioat.o 00:14:35.451 CXX lib/trace_parser/trace.o 00:14:35.451 CC lib/util/base64.o 00:14:35.451 CC lib/util/bit_array.o 00:14:35.451 CC lib/util/cpuset.o 00:14:35.451 CC lib/util/crc16.o 00:14:35.451 CC lib/util/crc32.o 00:14:35.451 CC lib/util/crc32c.o 00:14:35.451 CC lib/util/crc32_ieee.o 00:14:35.451 CC lib/util/crc64.o 00:14:35.451 CC lib/util/dif.o 00:14:35.451 CC lib/util/fd.o 00:14:35.451 CC lib/util/file.o 00:14:35.451 LIB libspdk_dma.a 00:14:35.451 CC lib/util/hexlify.o 00:14:35.711 CC lib/util/iov.o 00:14:35.711 CC lib/util/math.o 00:14:35.711 CC lib/util/pipe.o 00:14:35.711 LIB libspdk_ioat.a 00:14:35.711 CC lib/util/strerror_tls.o 00:14:35.711 CC lib/util/string.o 00:14:35.711 CC lib/util/uuid.o 00:14:35.711 CC lib/util/fd_group.o 00:14:35.711 CC lib/util/xor.o 00:14:35.711 CC lib/util/zipf.o 00:14:35.971 LIB libspdk_util.a 00:14:35.971 CC lib/rdma_provider/rdma_provider_verbs.o 00:14:35.971 CC lib/rdma_provider/common.o 00:14:35.971 CC lib/conf/conf.o 00:14:35.971 CC lib/vmd/vmd.o 00:14:35.971 CC lib/vmd/led.o 00:14:35.971 CC lib/rdma_utils/rdma_utils.o 00:14:35.971 CC lib/env_dpdk/env.o 00:14:35.971 CC lib/json/json_parse.o 00:14:35.971 CC lib/idxd/idxd.o 00:14:35.971 CC lib/json/json_util.o 00:14:36.230 CC lib/env_dpdk/memory.o 00:14:36.230 LIB libspdk_rdma_provider.a 00:14:36.230 CC lib/idxd/idxd_user.o 00:14:36.230 CC lib/env_dpdk/pci.o 00:14:36.230 LIB libspdk_conf.a 00:14:36.230 CC lib/json/json_write.o 00:14:36.230 LIB libspdk_rdma_utils.a 00:14:36.230 CC lib/env_dpdk/init.o 00:14:36.230 CC lib/env_dpdk/threads.o 00:14:36.230 CC lib/env_dpdk/pci_ioat.o 00:14:36.230 LIB libspdk_vmd.a 00:14:36.230 CC lib/env_dpdk/pci_virtio.o 00:14:36.230 CC lib/env_dpdk/pci_vmd.o 00:14:36.230 LIB libspdk_idxd.a 00:14:36.230 CC lib/env_dpdk/pci_idxd.o 00:14:36.230 CC lib/env_dpdk/pci_event.o 00:14:36.230 CC lib/env_dpdk/sigbus_handler.o 00:14:36.230 CC lib/env_dpdk/pci_dpdk.o 00:14:36.489 LIB libspdk_json.a 00:14:36.489 CC lib/env_dpdk/pci_dpdk_2207.o 00:14:36.489 CC lib/env_dpdk/pci_dpdk_2211.o 00:14:36.489 CC lib/jsonrpc/jsonrpc_server.o 00:14:36.489 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:14:36.489 CC lib/jsonrpc/jsonrpc_client.o 00:14:36.489 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:14:36.749 LIB libspdk_jsonrpc.a 00:14:36.749 LIB libspdk_env_dpdk.a 00:14:36.749 CC lib/rpc/rpc.o 00:14:37.009 LIB libspdk_rpc.a 00:14:37.009 CC lib/trace/trace.o 00:14:37.009 CC lib/trace/trace_flags.o 00:14:37.009 CC lib/trace/trace_rpc.o 00:14:37.009 CC lib/keyring/keyring_rpc.o 00:14:37.009 CC lib/keyring/keyring.o 00:14:37.009 CC lib/notify/notify.o 00:14:37.009 CC lib/notify/notify_rpc.o 00:14:37.268 LIB libspdk_notify.a 00:14:37.269 LIB libspdk_keyring.a 00:14:37.269 LIB libspdk_trace.a 00:14:37.269 CC lib/sock/sock_rpc.o 00:14:37.269 CC lib/sock/sock.o 00:14:37.269 CC lib/thread/thread.o 00:14:37.269 CC lib/thread/iobuf.o 00:14:37.269 LIB 
libspdk_trace_parser.a 00:14:37.529 LIB libspdk_sock.a 00:14:37.788 CC lib/nvme/nvme_ctrlr_cmd.o 00:14:37.788 CC lib/nvme/nvme_ctrlr.o 00:14:37.788 LIB libspdk_thread.a 00:14:37.788 CC lib/nvme/nvme_fabric.o 00:14:37.788 CC lib/nvme/nvme_ns_cmd.o 00:14:37.788 CC lib/nvme/nvme_pcie_common.o 00:14:37.788 CC lib/nvme/nvme_ns.o 00:14:37.788 CC lib/nvme/nvme_pcie.o 00:14:37.788 CC lib/nvme/nvme_qpair.o 00:14:37.788 CC lib/nvme/nvme.o 00:14:37.788 CC lib/nvme/nvme_quirks.o 00:14:38.049 CC lib/accel/accel.o 00:14:38.049 CC lib/accel/accel_rpc.o 00:14:38.049 CC lib/blob/blobstore.o 00:14:38.308 CC lib/blob/request.o 00:14:38.308 CC lib/blob/zeroes.o 00:14:38.308 CC lib/blob/blob_bs_dev.o 00:14:38.308 CC lib/accel/accel_sw.o 00:14:38.308 CC lib/init/json_config.o 00:14:38.308 CC lib/nvme/nvme_transport.o 00:14:38.308 CC lib/init/subsystem.o 00:14:38.308 CC lib/nvme/nvme_discovery.o 00:14:38.308 CC lib/init/subsystem_rpc.o 00:14:38.308 CC lib/init/rpc.o 00:14:38.308 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:14:38.308 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:14:38.308 CC lib/nvme/nvme_tcp.o 00:14:38.308 CC lib/nvme/nvme_opal.o 00:14:38.308 CC lib/nvme/nvme_io_msg.o 00:14:38.308 LIB libspdk_accel.a 00:14:38.308 LIB libspdk_init.a 00:14:38.308 CC lib/nvme/nvme_poll_group.o 00:14:38.308 CC lib/nvme/nvme_zns.o 00:14:38.567 CC lib/bdev/bdev.o 00:14:38.825 CC lib/bdev/bdev_rpc.o 00:14:38.825 CC lib/event/app.o 00:14:38.825 CC lib/bdev/bdev_zone.o 00:14:38.825 CC lib/event/reactor.o 00:14:38.825 CC lib/event/log_rpc.o 00:14:38.825 CC lib/bdev/part.o 00:14:38.825 CC lib/bdev/scsi_nvme.o 00:14:38.825 CC lib/event/app_rpc.o 00:14:38.825 CC lib/nvme/nvme_stubs.o 00:14:38.825 CC lib/event/scheduler_static.o 00:14:38.825 CC lib/nvme/nvme_auth.o 00:14:38.825 CC lib/nvme/nvme_rdma.o 00:14:39.083 LIB libspdk_event.a 00:14:39.083 LIB libspdk_blob.a 00:14:39.083 CC lib/lvol/lvol.o 00:14:39.083 CC lib/blobfs/tree.o 00:14:39.083 CC lib/blobfs/blobfs.o 00:14:39.083 LIB libspdk_bdev.a 00:14:39.341 CC lib/scsi/dev.o 00:14:39.341 CC lib/scsi/lun.o 00:14:39.341 CC lib/scsi/port.o 00:14:39.341 CC lib/scsi/scsi.o 00:14:39.341 CC lib/scsi/scsi_pr.o 00:14:39.341 CC lib/scsi/scsi_bdev.o 00:14:39.341 LIB libspdk_blobfs.a 00:14:39.341 LIB libspdk_lvol.a 00:14:39.341 CC lib/scsi/scsi_rpc.o 00:14:39.341 CC lib/scsi/task.o 00:14:39.341 LIB libspdk_scsi.a 00:14:39.600 CC lib/iscsi/conn.o 00:14:39.600 CC lib/iscsi/init_grp.o 00:14:39.600 CC lib/iscsi/iscsi.o 00:14:39.600 CC lib/iscsi/param.o 00:14:39.600 CC lib/iscsi/md5.o 00:14:39.600 CC lib/iscsi/tgt_node.o 00:14:39.600 CC lib/iscsi/portal_grp.o 00:14:39.600 CC lib/iscsi/iscsi_subsystem.o 00:14:39.600 CC lib/iscsi/iscsi_rpc.o 00:14:39.600 LIB libspdk_nvme.a 00:14:39.600 CC lib/iscsi/task.o 00:14:39.858 CC lib/nvmf/ctrlr.o 00:14:39.858 CC lib/nvmf/ctrlr_bdev.o 00:14:39.858 CC lib/nvmf/ctrlr_discovery.o 00:14:39.858 CC lib/nvmf/subsystem.o 00:14:39.858 CC lib/nvmf/nvmf.o 00:14:39.858 CC lib/nvmf/transport.o 00:14:39.858 CC lib/nvmf/nvmf_rpc.o 00:14:39.858 CC lib/nvmf/tcp.o 00:14:39.858 CC lib/nvmf/stubs.o 00:14:39.858 CC lib/nvmf/mdns_server.o 00:14:39.858 CC lib/nvmf/rdma.o 00:14:39.858 CC lib/nvmf/auth.o 00:14:39.858 LIB libspdk_iscsi.a 00:14:40.424 LIB libspdk_nvmf.a 00:14:40.424 CC module/env_dpdk/env_dpdk_rpc.o 00:14:40.424 CC module/scheduler/dynamic/scheduler_dynamic.o 00:14:40.424 CC module/accel/error/accel_error_rpc.o 00:14:40.424 CC module/accel/error/accel_error.o 00:14:40.424 CC module/keyring/file/keyring.o 00:14:40.424 CC module/accel/ioat/accel_ioat.o 00:14:40.424 CC 
module/sock/posix/posix.o 00:14:40.424 CC module/accel/dsa/accel_dsa.o 00:14:40.424 CC module/blob/bdev/blob_bdev.o 00:14:40.424 CC module/accel/iaa/accel_iaa.o 00:14:40.424 LIB libspdk_env_dpdk_rpc.a 00:14:40.682 CC module/accel/ioat/accel_ioat_rpc.o 00:14:40.682 CC module/keyring/file/keyring_rpc.o 00:14:40.682 CC module/accel/iaa/accel_iaa_rpc.o 00:14:40.682 LIB libspdk_scheduler_dynamic.a 00:14:40.682 CC module/accel/dsa/accel_dsa_rpc.o 00:14:40.682 LIB libspdk_accel_error.a 00:14:40.682 LIB libspdk_accel_ioat.a 00:14:40.682 LIB libspdk_blob_bdev.a 00:14:40.682 LIB libspdk_keyring_file.a 00:14:40.682 LIB libspdk_accel_iaa.a 00:14:40.682 LIB libspdk_accel_dsa.a 00:14:40.682 CC module/bdev/error/vbdev_error.o 00:14:40.682 CC module/blobfs/bdev/blobfs_bdev.o 00:14:40.682 CC module/bdev/null/bdev_null.o 00:14:40.682 CC module/bdev/malloc/bdev_malloc.o 00:14:40.682 CC module/bdev/lvol/vbdev_lvol.o 00:14:40.682 CC module/bdev/delay/vbdev_delay.o 00:14:40.682 CC module/bdev/passthru/vbdev_passthru.o 00:14:40.682 LIB libspdk_sock_posix.a 00:14:40.682 CC module/bdev/nvme/bdev_nvme.o 00:14:40.682 CC module/bdev/gpt/gpt.o 00:14:40.682 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:14:40.941 CC module/bdev/gpt/vbdev_gpt.o 00:14:40.942 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:14:40.942 CC module/bdev/error/vbdev_error_rpc.o 00:14:40.942 CC module/bdev/null/bdev_null_rpc.o 00:14:40.942 CC module/bdev/malloc/bdev_malloc_rpc.o 00:14:40.942 CC module/bdev/delay/vbdev_delay_rpc.o 00:14:40.942 LIB libspdk_bdev_passthru.a 00:14:40.942 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:14:40.942 LIB libspdk_bdev_error.a 00:14:40.942 CC module/bdev/raid/bdev_raid.o 00:14:40.942 LIB libspdk_blobfs_bdev.a 00:14:40.942 CC module/bdev/raid/bdev_raid_rpc.o 00:14:40.942 LIB libspdk_bdev_null.a 00:14:40.942 CC module/bdev/raid/bdev_raid_sb.o 00:14:40.942 LIB libspdk_bdev_malloc.a 00:14:40.942 LIB libspdk_bdev_delay.a 00:14:40.942 LIB libspdk_bdev_gpt.a 00:14:41.201 CC module/bdev/nvme/bdev_nvme_rpc.o 00:14:41.202 CC module/bdev/nvme/nvme_rpc.o 00:14:41.202 CC module/bdev/nvme/bdev_mdns_client.o 00:14:41.202 CC module/bdev/raid/raid0.o 00:14:41.202 CC module/bdev/raid/raid1.o 00:14:41.202 LIB libspdk_bdev_lvol.a 00:14:41.202 CC module/bdev/raid/concat.o 00:14:41.202 CC module/bdev/split/vbdev_split.o 00:14:41.202 CC module/bdev/zone_block/vbdev_zone_block.o 00:14:41.202 CC module/bdev/split/vbdev_split_rpc.o 00:14:41.202 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:14:41.202 LIB libspdk_bdev_raid.a 00:14:41.202 CC module/bdev/aio/bdev_aio.o 00:14:41.202 CC module/bdev/aio/bdev_aio_rpc.o 00:14:41.202 LIB libspdk_bdev_split.a 00:14:41.202 LIB libspdk_bdev_zone_block.a 00:14:41.202 LIB libspdk_bdev_nvme.a 00:14:41.461 LIB libspdk_bdev_aio.a 00:14:41.731 CC module/event/subsystems/scheduler/scheduler.o 00:14:41.731 CC module/event/subsystems/iobuf/iobuf.o 00:14:41.731 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:14:41.731 CC module/event/subsystems/vmd/vmd_rpc.o 00:14:41.731 CC module/event/subsystems/vmd/vmd.o 00:14:41.731 CC module/event/subsystems/sock/sock.o 00:14:41.731 CC module/event/subsystems/keyring/keyring.o 00:14:41.731 LIB libspdk_event_scheduler.a 00:14:41.731 LIB libspdk_event_vmd.a 00:14:41.731 LIB libspdk_event_sock.a 00:14:41.731 LIB libspdk_event_iobuf.a 00:14:41.731 LIB libspdk_event_keyring.a 00:14:41.989 CC module/event/subsystems/accel/accel.o 00:14:41.989 LIB libspdk_event_accel.a 00:14:42.248 CC module/event/subsystems/bdev/bdev.o 00:14:42.249 LIB libspdk_event_bdev.a 00:14:42.508 CC 
module/event/subsystems/scsi/scsi.o 00:14:42.508 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:14:42.508 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:14:42.508 LIB libspdk_event_scsi.a 00:14:42.508 LIB libspdk_event_nvmf.a 00:14:42.767 CC module/event/subsystems/iscsi/iscsi.o 00:14:42.767 LIB libspdk_event_iscsi.a 00:14:43.026 CC app/trace_record/trace_record.o 00:14:43.026 TEST_HEADER include/spdk/accel.h 00:14:43.026 TEST_HEADER include/spdk/accel_module.h 00:14:43.026 TEST_HEADER include/spdk/assert.h 00:14:43.026 TEST_HEADER include/spdk/barrier.h 00:14:43.026 TEST_HEADER include/spdk/base64.h 00:14:43.026 TEST_HEADER include/spdk/bdev.h 00:14:43.026 CXX app/trace/trace.o 00:14:43.026 TEST_HEADER include/spdk/bdev_module.h 00:14:43.026 TEST_HEADER include/spdk/bdev_zone.h 00:14:43.026 TEST_HEADER include/spdk/bit_array.h 00:14:43.026 TEST_HEADER include/spdk/bit_pool.h 00:14:43.026 TEST_HEADER include/spdk/blob.h 00:14:43.026 TEST_HEADER include/spdk/blob_bdev.h 00:14:43.026 TEST_HEADER include/spdk/blobfs.h 00:14:43.026 TEST_HEADER include/spdk/blobfs_bdev.h 00:14:43.026 TEST_HEADER include/spdk/conf.h 00:14:43.026 TEST_HEADER include/spdk/config.h 00:14:43.026 TEST_HEADER include/spdk/cpuset.h 00:14:43.026 TEST_HEADER include/spdk/crc16.h 00:14:43.026 CC app/iscsi_tgt/iscsi_tgt.o 00:14:43.026 TEST_HEADER include/spdk/crc32.h 00:14:43.026 TEST_HEADER include/spdk/crc64.h 00:14:43.026 TEST_HEADER include/spdk/dif.h 00:14:43.026 TEST_HEADER include/spdk/dma.h 00:14:43.026 TEST_HEADER include/spdk/endian.h 00:14:43.026 CC app/nvmf_tgt/nvmf_main.o 00:14:43.026 TEST_HEADER include/spdk/env.h 00:14:43.026 TEST_HEADER include/spdk/env_dpdk.h 00:14:43.026 TEST_HEADER include/spdk/event.h 00:14:43.026 TEST_HEADER include/spdk/fd.h 00:14:43.026 TEST_HEADER include/spdk/fd_group.h 00:14:43.026 TEST_HEADER include/spdk/file.h 00:14:43.026 CC test/thread/poller_perf/poller_perf.o 00:14:43.026 TEST_HEADER include/spdk/ftl.h 00:14:43.026 TEST_HEADER include/spdk/gpt_spec.h 00:14:43.026 CC examples/util/zipf/zipf.o 00:14:43.026 TEST_HEADER include/spdk/hexlify.h 00:14:43.026 TEST_HEADER include/spdk/histogram_data.h 00:14:43.026 TEST_HEADER include/spdk/idxd.h 00:14:43.026 TEST_HEADER include/spdk/idxd_spec.h 00:14:43.026 TEST_HEADER include/spdk/init.h 00:14:43.026 TEST_HEADER include/spdk/ioat.h 00:14:43.026 TEST_HEADER include/spdk/ioat_spec.h 00:14:43.026 TEST_HEADER include/spdk/iscsi_spec.h 00:14:43.026 TEST_HEADER include/spdk/json.h 00:14:43.026 TEST_HEADER include/spdk/jsonrpc.h 00:14:43.026 TEST_HEADER include/spdk/keyring.h 00:14:43.026 TEST_HEADER include/spdk/keyring_module.h 00:14:43.026 TEST_HEADER include/spdk/likely.h 00:14:43.026 TEST_HEADER include/spdk/log.h 00:14:43.026 TEST_HEADER include/spdk/lvol.h 00:14:43.026 TEST_HEADER include/spdk/memory.h 00:14:43.026 TEST_HEADER include/spdk/mmio.h 00:14:43.026 TEST_HEADER include/spdk/nbd.h 00:14:43.026 TEST_HEADER include/spdk/notify.h 00:14:43.026 TEST_HEADER include/spdk/nvme.h 00:14:43.026 TEST_HEADER include/spdk/nvme_intel.h 00:14:43.026 TEST_HEADER include/spdk/nvme_ocssd.h 00:14:43.026 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:14:43.026 CC test/app/bdev_svc/bdev_svc.o 00:14:43.026 TEST_HEADER include/spdk/nvme_spec.h 00:14:43.026 TEST_HEADER include/spdk/nvme_zns.h 00:14:43.026 TEST_HEADER include/spdk/nvmf.h 00:14:43.026 TEST_HEADER include/spdk/nvmf_cmd.h 00:14:43.026 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:14:43.026 TEST_HEADER include/spdk/nvmf_spec.h 00:14:43.026 TEST_HEADER include/spdk/nvmf_transport.h 
00:14:43.026 TEST_HEADER include/spdk/opal.h 00:14:43.026 TEST_HEADER include/spdk/opal_spec.h 00:14:43.026 TEST_HEADER include/spdk/pci_ids.h 00:14:43.026 TEST_HEADER include/spdk/pipe.h 00:14:43.026 TEST_HEADER include/spdk/queue.h 00:14:43.026 TEST_HEADER include/spdk/reduce.h 00:14:43.026 LINK spdk_trace_record 00:14:43.026 TEST_HEADER include/spdk/rpc.h 00:14:43.026 TEST_HEADER include/spdk/scheduler.h 00:14:43.026 CC test/env/mem_callbacks/mem_callbacks.o 00:14:43.026 TEST_HEADER include/spdk/scsi.h 00:14:43.026 TEST_HEADER include/spdk/scsi_spec.h 00:14:43.026 TEST_HEADER include/spdk/sock.h 00:14:43.026 TEST_HEADER include/spdk/stdinc.h 00:14:43.026 TEST_HEADER include/spdk/string.h 00:14:43.026 TEST_HEADER include/spdk/thread.h 00:14:43.026 TEST_HEADER include/spdk/trace.h 00:14:43.026 TEST_HEADER include/spdk/trace_parser.h 00:14:43.026 TEST_HEADER include/spdk/tree.h 00:14:43.026 CC test/dma/test_dma/test_dma.o 00:14:43.026 TEST_HEADER include/spdk/ublk.h 00:14:43.026 TEST_HEADER include/spdk/util.h 00:14:43.026 TEST_HEADER include/spdk/uuid.h 00:14:43.026 TEST_HEADER include/spdk/version.h 00:14:43.026 LINK zipf 00:14:43.026 TEST_HEADER include/spdk/vfio_user_pci.h 00:14:43.026 TEST_HEADER include/spdk/vfio_user_spec.h 00:14:43.026 TEST_HEADER include/spdk/vhost.h 00:14:43.026 LINK poller_perf 00:14:43.026 TEST_HEADER include/spdk/vmd.h 00:14:43.026 TEST_HEADER include/spdk/xor.h 00:14:43.026 TEST_HEADER include/spdk/zipf.h 00:14:43.026 CXX test/cpp_headers/accel.o 00:14:43.026 LINK nvmf_tgt 00:14:43.026 LINK iscsi_tgt 00:14:43.026 LINK bdev_svc 00:14:43.026 CC test/env/vtophys/vtophys.o 00:14:43.026 CC test/thread/lock/spdk_lock.o 00:14:43.284 LINK test_dma 00:14:43.284 CXX test/cpp_headers/accel_module.o 00:14:43.284 CC examples/ioat/perf/perf.o 00:14:43.284 LINK vtophys 00:14:43.284 CC test/rpc_client/rpc_client_test.o 00:14:43.284 LINK ioat_perf 00:14:43.284 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:14:43.284 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:14:43.284 CXX test/cpp_headers/assert.o 00:14:43.284 LINK env_dpdk_post_init 00:14:43.284 LINK rpc_client_test 00:14:43.284 CC test/env/memory/memory_ut.o 00:14:43.284 CC examples/ioat/verify/verify.o 00:14:43.543 LINK spdk_lock 00:14:43.543 CXX test/cpp_headers/barrier.o 00:14:43.543 CC app/spdk_tgt/spdk_tgt.o 00:14:43.543 LINK verify 00:14:43.543 LINK mem_callbacks 00:14:43.543 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:14:43.543 CC test/app/histogram_perf/histogram_perf.o 00:14:43.543 CC test/env/pci/pci_ut.o 00:14:43.543 LINK nvme_fuzz 00:14:43.543 CC app/spdk_lspci/spdk_lspci.o 00:14:43.543 LINK histogram_perf 00:14:43.543 CXX test/cpp_headers/base64.o 00:14:43.543 CC examples/vmd/lsvmd/lsvmd.o 00:14:43.543 LINK spdk_lspci 00:14:43.543 LINK spdk_tgt 00:14:43.802 LINK spdk_trace 00:14:43.802 CXX test/cpp_headers/bdev.o 00:14:43.802 LINK lsvmd 00:14:43.802 LINK pci_ut 00:14:43.802 CC examples/idxd/perf/perf.o 00:14:43.802 CC examples/vmd/led/led.o 00:14:43.802 CC app/spdk_nvme_perf/perf.o 00:14:43.802 CXX test/cpp_headers/bdev_module.o 00:14:43.802 CC test/app/jsoncat/jsoncat.o 00:14:43.802 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:14:43.802 CC test/accel/dif/dif.o 00:14:44.061 LINK idxd_perf 00:14:44.061 LINK led 00:14:44.061 LINK iscsi_fuzz 00:14:44.061 LINK histogram_ut 00:14:44.061 CC test/unit/lib/log/log.c/log_ut.o 00:14:44.061 LINK jsoncat 00:14:44.061 CXX test/cpp_headers/bdev_zone.o 00:14:44.061 CC test/app/stub/stub.o 00:14:44.061 LINK memory_ut 00:14:44.061 LINK dif 00:14:44.061 
LINK spdk_nvme_perf 00:14:44.061 LINK log_ut 00:14:44.061 CC test/blobfs/mkfs/mkfs.o 00:14:44.320 LINK stub 00:14:44.320 CC examples/thread/thread/thread_ex.o 00:14:44.320 CC test/unit/lib/rdma/common.c/common_ut.o 00:14:44.320 CXX test/cpp_headers/bit_array.o 00:14:44.320 gmake[2]: Nothing to be done for 'all'. 00:14:44.320 CC test/event/event_perf/event_perf.o 00:14:44.320 CC examples/sock/hello_world/hello_sock.o 00:14:44.320 CC app/spdk_nvme_identify/identify.o 00:14:44.320 LINK mkfs 00:14:44.320 CC app/spdk_nvme_discover/discovery_aer.o 00:14:44.320 LINK event_perf 00:14:44.320 CC test/event/reactor/reactor.o 00:14:44.320 CXX test/cpp_headers/bit_pool.o 00:14:44.320 LINK hello_sock 00:14:44.320 CXX test/cpp_headers/blob.o 00:14:44.320 CC test/unit/lib/util/base64.c/base64_ut.o 00:14:44.320 LINK thread 00:14:44.320 LINK spdk_nvme_discover 00:14:44.580 LINK reactor 00:14:44.580 CXX test/cpp_headers/blob_bdev.o 00:14:44.580 LINK base64_ut 00:14:44.580 CC test/event/reactor_perf/reactor_perf.o 00:14:44.580 LINK spdk_nvme_identify 00:14:44.580 CC test/unit/lib/dma/dma.c/dma_ut.o 00:14:44.580 LINK common_ut 00:14:44.580 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:14:44.580 CC app/spdk_top/spdk_top.o 00:14:44.580 CC test/nvme/aer/aer.o 00:14:44.580 CC test/nvme/reset/reset.o 00:14:44.580 LINK reactor_perf 00:14:44.580 CC examples/nvme/hello_world/hello_world.o 00:14:44.580 CC app/fio/nvme/fio_plugin.o 00:14:44.580 CC test/nvme/sgl/sgl.o 00:14:44.841 CXX test/cpp_headers/blobfs.o 00:14:44.841 CC examples/nvme/reconnect/reconnect.o 00:14:44.841 LINK sgl 00:14:44.841 LINK reset 00:14:44.841 LINK hello_world 00:14:44.841 LINK aer 00:14:44.841 LINK bit_array_ut 00:14:44.841 LINK dma_ut 00:14:44.841 CXX test/cpp_headers/blobfs_bdev.o 00:14:44.841 LINK reconnect 00:14:44.841 fio_plugin.c:1582:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:14:44.841 struct spdk_nvme_fdp_ruhs ruhs; 00:14:44.841 ^ 00:14:44.841 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:14:44.841 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:14:44.841 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:14:45.160 1 warning generated. 
00:14:45.160 LINK spdk_nvme 00:14:45.160 CC test/nvme/e2edp/nvme_dp.o 00:14:45.160 LINK spdk_top 00:14:45.160 CC app/fio/bdev/fio_plugin.o 00:14:45.160 LINK crc16_ut 00:14:45.160 CC examples/nvme/nvme_manage/nvme_manage.o 00:14:45.160 LINK cpuset_ut 00:14:45.160 CC examples/nvme/arbitration/arbitration.o 00:14:45.160 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:14:45.160 CC examples/nvme/hotplug/hotplug.o 00:14:45.160 CC test/bdev/bdevio/bdevio.o 00:14:45.160 LINK ioat_ut 00:14:45.160 CC test/nvme/overhead/overhead.o 00:14:45.160 CXX test/cpp_headers/conf.o 00:14:45.160 LINK nvme_dp 00:14:45.160 LINK crc32_ieee_ut 00:14:45.160 CC examples/nvme/cmb_copy/cmb_copy.o 00:14:45.160 LINK hotplug 00:14:45.160 LINK nvme_manage 00:14:45.160 LINK arbitration 00:14:45.505 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:14:45.505 LINK overhead 00:14:45.505 LINK spdk_bdev 00:14:45.505 CXX test/cpp_headers/config.o 00:14:45.505 CXX test/cpp_headers/cpuset.o 00:14:45.505 LINK crc32c_ut 00:14:45.505 CC examples/accel/perf/accel_perf.o 00:14:45.505 CC test/nvme/err_injection/err_injection.o 00:14:45.505 LINK cmb_copy 00:14:45.505 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:14:45.505 LINK bdevio 00:14:45.505 CC test/unit/lib/util/dif.c/dif_ut.o 00:14:45.505 CC examples/blob/hello_world/hello_blob.o 00:14:45.505 CC test/nvme/startup/startup.o 00:14:45.505 CC examples/nvme/abort/abort.o 00:14:45.505 LINK accel_perf 00:14:45.505 LINK err_injection 00:14:45.505 LINK crc64_ut 00:14:45.505 CXX test/cpp_headers/crc16.o 00:14:45.505 CC test/nvme/reserve/reserve.o 00:14:45.505 CXX test/cpp_headers/crc32.o 00:14:45.505 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:14:45.505 LINK startup 00:14:45.843 LINK abort 00:14:45.843 CC examples/blob/cli/blobcli.o 00:14:45.843 LINK hello_blob 00:14:45.843 CXX test/cpp_headers/crc64.o 00:14:45.843 CC examples/bdev/hello_world/hello_bdev.o 00:14:45.843 LINK pmr_persistence 00:14:45.843 LINK reserve 00:14:45.843 CC test/unit/lib/util/iov.c/iov_ut.o 00:14:45.843 CXX test/cpp_headers/dif.o 00:14:45.843 CXX test/cpp_headers/dma.o 00:14:45.843 CC examples/bdev/bdevperf/bdevperf.o 00:14:45.843 LINK blobcli 00:14:45.843 LINK hello_bdev 00:14:45.843 CC test/nvme/simple_copy/simple_copy.o 00:14:45.843 LINK dif_ut 00:14:45.843 LINK iov_ut 00:14:45.843 CXX test/cpp_headers/endian.o 00:14:45.843 CC test/nvme/connect_stress/connect_stress.o 00:14:45.843 CC test/unit/lib/util/math.c/math_ut.o 00:14:46.102 CXX test/cpp_headers/env.o 00:14:46.102 LINK simple_copy 00:14:46.102 LINK connect_stress 00:14:46.102 CC test/nvme/boot_partition/boot_partition.o 00:14:46.102 CC test/nvme/compliance/nvme_compliance.o 00:14:46.102 LINK bdevperf 00:14:46.102 LINK math_ut 00:14:46.102 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:14:46.102 CXX test/cpp_headers/env_dpdk.o 00:14:46.102 CC test/unit/lib/util/string.c/string_ut.o 00:14:46.102 CXX test/cpp_headers/event.o 00:14:46.102 CC test/unit/lib/util/xor.c/xor_ut.o 00:14:46.102 CC test/nvme/fused_ordering/fused_ordering.o 00:14:46.102 LINK boot_partition 00:14:46.102 CC test/nvme/doorbell_aers/doorbell_aers.o 00:14:46.102 LINK nvme_compliance 00:14:46.102 LINK string_ut 00:14:46.360 CXX test/cpp_headers/fd.o 00:14:46.360 LINK fused_ordering 00:14:46.360 CXX test/cpp_headers/fd_group.o 00:14:46.360 CXX test/cpp_headers/file.o 00:14:46.360 CC test/nvme/fdp/fdp.o 00:14:46.360 CXX test/cpp_headers/ftl.o 00:14:46.360 LINK pipe_ut 00:14:46.360 LINK doorbell_aers 00:14:46.360 LINK xor_ut 00:14:46.360 CXX test/cpp_headers/gpt_spec.o 00:14:46.360 CC 
examples/nvmf/nvmf/nvmf.o 00:14:46.360 CXX test/cpp_headers/hexlify.o 00:14:46.360 LINK fdp 00:14:46.360 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:14:46.619 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:14:46.619 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:14:46.619 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:14:46.619 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:14:46.619 CXX test/cpp_headers/histogram_data.o 00:14:46.619 CXX test/cpp_headers/idxd.o 00:14:46.619 CXX test/cpp_headers/idxd_spec.o 00:14:46.619 LINK nvmf 00:14:46.619 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:14:46.619 CXX test/cpp_headers/init.o 00:14:46.619 LINK pci_event_ut 00:14:46.619 CXX test/cpp_headers/ioat.o 00:14:46.877 CXX test/cpp_headers/ioat_spec.o 00:14:46.877 LINK json_util_ut 00:14:46.877 CXX test/cpp_headers/iscsi_spec.o 00:14:46.877 CXX test/cpp_headers/json.o 00:14:46.877 CXX test/cpp_headers/jsonrpc.o 00:14:46.877 LINK idxd_user_ut 00:14:46.877 CXX test/cpp_headers/keyring.o 00:14:46.877 CXX test/cpp_headers/keyring_module.o 00:14:46.877 CXX test/cpp_headers/likely.o 00:14:46.877 CXX test/cpp_headers/log.o 00:14:46.877 CXX test/cpp_headers/lvol.o 00:14:46.877 CXX test/cpp_headers/mmio.o 00:14:46.877 CXX test/cpp_headers/memory.o 00:14:46.877 CXX test/cpp_headers/nbd.o 00:14:46.877 CXX test/cpp_headers/notify.o 00:14:47.159 CXX test/cpp_headers/nvme.o 00:14:47.159 LINK idxd_ut 00:14:47.159 CXX test/cpp_headers/nvme_intel.o 00:14:47.159 CXX test/cpp_headers/nvme_ocssd.o 00:14:47.159 CXX test/cpp_headers/nvme_ocssd_spec.o 00:14:47.159 CXX test/cpp_headers/nvme_spec.o 00:14:47.159 CXX test/cpp_headers/nvme_zns.o 00:14:47.159 CXX test/cpp_headers/nvmf.o 00:14:47.159 LINK json_write_ut 00:14:47.159 CXX test/cpp_headers/nvmf_cmd.o 00:14:47.159 CXX test/cpp_headers/nvmf_fc_spec.o 00:14:47.159 CXX test/cpp_headers/nvmf_spec.o 00:14:47.159 CXX test/cpp_headers/nvmf_transport.o 00:14:47.159 CXX test/cpp_headers/opal.o 00:14:47.159 LINK json_parse_ut 00:14:47.159 CXX test/cpp_headers/opal_spec.o 00:14:47.159 CXX test/cpp_headers/pci_ids.o 00:14:47.159 CXX test/cpp_headers/pipe.o 00:14:47.418 CXX test/cpp_headers/queue.o 00:14:47.418 CXX test/cpp_headers/reduce.o 00:14:47.418 CXX test/cpp_headers/rpc.o 00:14:47.418 CXX test/cpp_headers/scheduler.o 00:14:47.418 CXX test/cpp_headers/scsi.o 00:14:47.418 CXX test/cpp_headers/scsi_spec.o 00:14:47.418 CXX test/cpp_headers/sock.o 00:14:47.418 CXX test/cpp_headers/stdinc.o 00:14:47.418 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:14:47.418 CXX test/cpp_headers/string.o 00:14:47.418 CXX test/cpp_headers/thread.o 00:14:47.418 CXX test/cpp_headers/trace.o 00:14:47.418 CXX test/cpp_headers/trace_parser.o 00:14:47.418 CXX test/cpp_headers/tree.o 00:14:47.418 CXX test/cpp_headers/ublk.o 00:14:47.418 CXX test/cpp_headers/util.o 00:14:47.418 CXX test/cpp_headers/uuid.o 00:14:47.418 CXX test/cpp_headers/version.o 00:14:47.677 CXX test/cpp_headers/vfio_user_pci.o 00:14:47.677 CXX test/cpp_headers/vfio_user_spec.o 00:14:47.677 CXX test/cpp_headers/vhost.o 00:14:47.677 CXX test/cpp_headers/vmd.o 00:14:47.677 LINK jsonrpc_server_ut 00:14:47.677 CXX test/cpp_headers/xor.o 00:14:47.677 CXX test/cpp_headers/zipf.o 00:14:47.935 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:14:48.192 LINK rpc_ut 00:14:48.192 CC test/unit/lib/thread/thread.c/thread_ut.o 00:14:48.192 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:14:48.192 CC test/unit/lib/sock/sock.c/sock_ut.o 00:14:48.192 CC test/unit/lib/sock/posix.c/posix_ut.o 00:14:48.450 CC 
test/unit/lib/keyring/keyring.c/keyring_ut.o 00:14:48.450 CC test/unit/lib/notify/notify.c/notify_ut.o 00:14:48.450 LINK keyring_ut 00:14:48.450 LINK iobuf_ut 00:14:48.450 LINK notify_ut 00:14:48.709 LINK posix_ut 00:14:48.709 LINK thread_ut 00:14:48.709 LINK sock_ut 00:14:48.967 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:14:48.967 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:14:48.967 CC test/unit/lib/init/rpc.c/rpc_ut.o 00:14:48.967 CC test/unit/lib/accel/accel.c/accel_ut.o 00:14:48.967 CC test/unit/lib/blob/blob.c/blob_ut.o 00:14:48.967 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:14:48.967 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:14:48.967 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:14:48.967 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:14:48.967 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:14:48.967 LINK rpc_ut 00:14:48.967 LINK subsystem_ut 00:14:48.967 LINK blob_bdev_ut 00:14:49.227 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:14:49.227 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:14:49.227 CC test/unit/lib/event/app.c/app_ut.o 00:14:49.486 LINK app_ut 00:14:49.486 LINK accel_ut 00:14:49.486 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:14:49.486 LINK nvme_ctrlr_cmd_ut 00:14:49.486 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:14:49.486 LINK nvme_ns_ut 00:14:49.750 LINK nvme_ut 00:14:49.750 LINK nvme_ctrlr_ocssd_cmd_ut 00:14:49.750 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:14:49.750 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:14:49.750 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:14:49.750 CC test/unit/lib/bdev/part.c/part_ut.o 00:14:49.750 LINK nvme_ns_ocssd_cmd_ut 00:14:49.750 LINK scsi_nvme_ut 00:14:49.750 LINK reactor_ut 00:14:49.750 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:14:50.009 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:14:50.009 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:14:50.009 LINK nvme_ns_cmd_ut 00:14:50.009 LINK nvme_ctrlr_ut 00:14:50.009 LINK gpt_ut 00:14:50.009 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:14:50.009 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:14:50.266 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:14:50.266 LINK nvme_poll_group_ut 00:14:50.266 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:14:50.525 LINK nvme_quirks_ut 00:14:50.525 LINK part_ut 00:14:50.525 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:14:50.525 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:14:50.525 LINK vbdev_lvol_ut 00:14:50.525 LINK blob_ut 00:14:50.525 LINK nvme_pcie_ut 00:14:50.525 LINK bdev_zone_ut 00:14:50.525 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:14:50.809 LINK nvme_qpair_ut 00:14:50.809 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:14:50.809 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:14:50.809 LINK bdev_raid_sb_ut 00:14:50.809 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:14:50.809 CC test/unit/lib/bdev/raid/raid0.c/raid0_ut.o 00:14:50.809 LINK bdev_raid_ut 00:14:50.809 LINK tree_ut 00:14:50.809 LINK concat_ut 00:14:50.809 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:14:50.809 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:14:50.809 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:14:51.067 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:14:51.067 LINK raid0_ut 00:14:51.067 LINK bdev_ut 00:14:51.067 LINK vbdev_zone_block_ut 00:14:51.067 CC 
test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:14:51.067 LINK raid1_ut 00:14:51.067 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:14:51.067 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:14:51.067 LINK nvme_tcp_ut 00:14:51.326 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:14:51.326 LINK bdev_ut 00:14:51.326 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:14:51.326 LINK blobfs_async_ut 00:14:51.326 LINK blobfs_bdev_ut 00:14:51.326 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:14:51.326 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:14:51.326 LINK blobfs_sync_ut 00:14:51.586 LINK nvme_transport_ut 00:14:51.586 LINK nvme_opal_ut 00:14:51.586 LINK nvme_io_msg_ut 00:14:51.846 LINK nvme_fabric_ut 00:14:51.846 LINK nvme_pcie_common_ut 00:14:52.106 LINK lvol_ut 00:14:52.366 LINK nvme_rdma_ut 00:14:52.366 LINK bdev_nvme_ut 00:14:52.624 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:14:52.624 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:14:52.624 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:14:52.624 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:14:52.624 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:14:52.624 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:14:52.624 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:14:52.624 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:14:52.624 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:14:52.624 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:14:52.882 LINK dev_ut 00:14:52.882 LINK scsi_ut 00:14:52.882 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:14:52.882 LINK scsi_pr_ut 00:14:52.882 LINK scsi_bdev_ut 00:14:52.882 CC test/unit/lib/nvmf/auth.c/auth_ut.o 00:14:52.882 LINK lun_ut 00:14:52.882 LINK ctrlr_bdev_ut 00:14:53.140 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:14:53.140 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:14:53.141 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:14:53.141 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:14:53.141 LINK ctrlr_discovery_ut 00:14:53.400 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:14:53.400 LINK subsystem_ut 00:14:53.400 LINK init_grp_ut 00:14:53.400 LINK nvmf_ut 00:14:53.400 LINK auth_ut 00:14:53.400 CC test/unit/lib/iscsi/param.c/param_ut.o 00:14:53.658 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:14:53.658 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:14:53.658 LINK ctrlr_ut 00:14:53.658 LINK conn_ut 00:14:53.658 LINK param_ut 00:14:53.916 LINK portal_grp_ut 00:14:53.916 LINK rdma_ut 00:14:53.916 LINK tcp_ut 00:14:53.916 LINK tgt_node_ut 00:14:53.916 LINK transport_ut 00:14:54.174 LINK iscsi_ut 00:14:54.174 00:14:54.174 real 1m12.692s 00:14:54.174 user 4m44.030s 00:14:54.174 sys 1m2.820s 00:14:54.174 09:41:22 unittest_build -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:14:54.174 09:41:22 unittest_build -- common/autotest_common.sh@10 -- $ set +x 00:14:54.174 ************************************ 00:14:54.174 END TEST unittest_build 00:14:54.174 ************************************ 00:14:54.174 09:41:22 -- common/autotest_common.sh@1142 -- $ return 0 00:14:54.174 09:41:22 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:14:54.174 09:41:22 -- pm/common@29 -- $ signal_monitor_resources TERM 00:14:54.174 09:41:22 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:14:54.174 09:41:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:14:54.174 09:41:22 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:14:54.174 09:41:22 -- pm/common@44 -- $ pid=1268 00:14:54.174 
09:41:22 -- pm/common@50 -- $ kill -TERM 1268 00:14:54.432 09:41:22 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:54.432 09:41:22 -- nvmf/common.sh@7 -- # uname -s 00:14:54.432 09:41:22 -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:14:54.432 09:41:22 -- nvmf/common.sh@7 -- # return 0 00:14:54.432 09:41:22 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:14:54.432 09:41:22 -- spdk/autotest.sh@32 -- # uname -s 00:14:54.432 09:41:22 -- spdk/autotest.sh@32 -- # '[' FreeBSD = Linux ']' 00:14:54.432 09:41:22 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:14:54.432 09:41:22 -- pm/common@17 -- # local monitor 00:14:54.432 09:41:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:14:54.432 09:41:22 -- pm/common@25 -- # sleep 1 00:14:54.432 09:41:22 -- pm/common@21 -- # date +%s 00:14:54.432 09:41:22 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721036482 00:14:54.432 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721036482_collect-vmstat.pm.log 00:14:55.366 09:41:23 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:14:55.366 09:41:23 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:14:55.366 09:41:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:55.366 09:41:23 -- common/autotest_common.sh@10 -- # set +x 00:14:55.366 09:41:23 -- spdk/autotest.sh@59 -- # create_test_list 00:14:55.366 09:41:23 -- common/autotest_common.sh@746 -- # xtrace_disable 00:14:55.366 09:41:23 -- common/autotest_common.sh@10 -- # set +x 00:14:55.366 09:41:23 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:14:55.366 09:41:23 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:14:55.366 09:41:23 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:14:55.366 09:41:23 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:14:55.366 09:41:23 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:14:55.366 09:41:23 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:14:55.366 09:41:23 -- common/autotest_common.sh@1455 -- # uname 00:14:55.366 09:41:23 -- common/autotest_common.sh@1455 -- # '[' FreeBSD = FreeBSD ']' 00:14:55.366 09:41:23 -- common/autotest_common.sh@1456 -- # kldunload contigmem.ko 00:14:55.366 kldunload: can't find file contigmem.ko 00:14:55.366 09:41:23 -- common/autotest_common.sh@1456 -- # true 00:14:55.366 09:41:23 -- common/autotest_common.sh@1457 -- # '[' -n '' ']' 00:14:55.366 09:41:23 -- common/autotest_common.sh@1463 -- # cp -f /home/vagrant/spdk_repo/spdk/dpdk/build/kmod/contigmem.ko /boot/modules/ 00:14:55.366 09:41:23 -- common/autotest_common.sh@1464 -- # cp -f /home/vagrant/spdk_repo/spdk/dpdk/build/kmod/contigmem.ko /boot/kernel/ 00:14:55.366 09:41:23 -- common/autotest_common.sh@1465 -- # cp -f /home/vagrant/spdk_repo/spdk/dpdk/build/kmod/nic_uio.ko /boot/modules/ 00:14:55.366 09:41:23 -- common/autotest_common.sh@1466 -- # cp -f /home/vagrant/spdk_repo/spdk/dpdk/build/kmod/nic_uio.ko /boot/kernel/ 00:14:55.366 09:41:23 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:14:55.366 09:41:23 -- common/autotest_common.sh@1475 -- # uname 00:14:55.366 09:41:23 -- common/autotest_common.sh@1475 -- # [[ FreeBSD = FreeBSD ]] 00:14:55.366 09:41:23 -- common/autotest_common.sh@1475 -- # sysctl -n kern.ipc.maxsockbuf 00:14:55.366 09:41:23 -- 
common/autotest_common.sh@1475 -- # (( 2097152 < 4194304 )) 00:14:55.366 09:41:23 -- common/autotest_common.sh@1476 -- # sysctl kern.ipc.maxsockbuf=4194304 00:14:55.366 kern.ipc.maxsockbuf: 2097152 -> 4194304 00:14:55.366 09:41:23 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:14:55.366 09:41:23 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=clang 00:14:55.366 09:41:23 -- spdk/autotest.sh@72 -- # hash lcov 00:14:55.366 /home/vagrant/spdk_repo/spdk/autotest.sh: line 72: hash: lcov: not found 00:14:55.366 09:41:23 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:14:55.366 09:41:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:55.366 09:41:23 -- common/autotest_common.sh@10 -- # set +x 00:14:55.624 09:41:23 -- spdk/autotest.sh@91 -- # rm -f 00:14:55.624 09:41:23 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:55.624 kldunload: can't find file contigmem.ko 00:14:55.624 kldunload: can't find file nic_uio.ko 00:14:55.624 09:41:23 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:14:55.624 09:41:23 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:14:55.624 09:41:23 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:14:55.624 09:41:23 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:14:55.624 09:41:23 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:14:55.624 09:41:23 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:14:55.624 09:41:23 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:14:55.624 09:41:23 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0ns1 00:14:55.624 09:41:23 -- scripts/common.sh@378 -- # local block=/dev/nvme0ns1 pt 00:14:55.624 09:41:23 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0ns1 00:14:55.624 nvme0ns1 is not a block device 00:14:55.624 09:41:23 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0ns1 00:14:55.624 /home/vagrant/spdk_repo/spdk/scripts/common.sh: line 391: blkid: command not found 00:14:55.624 09:41:23 -- scripts/common.sh@391 -- # pt= 00:14:55.624 09:41:23 -- scripts/common.sh@392 -- # return 1 00:14:55.624 09:41:23 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0ns1 bs=1M count=1 00:14:55.624 1+0 records in 00:14:55.624 1+0 records out 00:14:55.624 1048576 bytes transferred in 0.006544 secs (160242187 bytes/sec) 00:14:55.624 09:41:23 -- spdk/autotest.sh@118 -- # sync 00:14:56.561 09:41:24 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:14:56.561 09:41:24 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:14:56.561 09:41:24 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:14:57.153 09:41:25 -- spdk/autotest.sh@124 -- # uname -s 00:14:57.153 09:41:25 -- spdk/autotest.sh@124 -- # '[' FreeBSD = Linux ']' 00:14:57.153 09:41:25 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:14:57.153 Contigmem (not present) 00:14:57.153 Buffer Size: not set 00:14:57.153 Num Buffers: not set 00:14:57.153 00:14:57.153 00:14:57.153 Type BDF Vendor Device Driver 00:14:57.153 NVMe 0:0:16:0 0x1b36 0x0010 nvme0 00:14:57.153 09:41:25 -- spdk/autotest.sh@130 -- # uname -s 00:14:57.153 09:41:25 -- spdk/autotest.sh@130 -- # [[ FreeBSD == Linux ]] 00:14:57.153 09:41:25 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:14:57.153 09:41:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:57.153 09:41:25 -- common/autotest_common.sh@10 -- # set +x 00:14:57.153 09:41:25 -- spdk/autotest.sh@138 -- # timing_enter afterboot 
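
The freebsd_update_contigmem_mod and freebsd_set_maxsock_buf steps traced above boil down to a kernel-module refresh plus one sysctl. A condensed sketch, assuming a FreeBSD host with the DPDK kmod build present at the path the log shows (run as root; this is a simplification of the autotest_common.sh logic, not a verbatim copy):

    # Refresh the DPDK kernel modules SPDK needs on FreeBSD.
    kmod_dir=/home/vagrant/spdk_repo/spdk/dpdk/build/kmod

    kldunload contigmem.ko 2>/dev/null || true   # "can't find file" just means it wasn't loaded
    for mod in contigmem.ko nic_uio.ko; do
        cp -f "$kmod_dir/$mod" /boot/modules/
        cp -f "$kmod_dir/$mod" /boot/kernel/
    done

    # Raise the socket-buffer ceiling only if it is below the 4 MiB the tests expect.
    want=4194304
    cur=$(sysctl -n kern.ipc.maxsockbuf)
    if (( cur < want )); then
        sysctl kern.ipc.maxsockbuf="$want"       # log shows 2097152 -> 4194304
    fi
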
00:14:57.153 09:41:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:57.153 09:41:25 -- common/autotest_common.sh@10 -- # set +x 00:14:57.153 09:41:25 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:57.412 kldunload: can't find file nic_uio.ko 00:14:57.412 hw.nic_uio.bdfs="0:16:0" 00:14:57.412 hw.contigmem.num_buffers="8" 00:14:57.412 hw.contigmem.buffer_size="268435456" 00:14:58.353 09:41:26 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:14:58.353 09:41:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:58.353 09:41:26 -- common/autotest_common.sh@10 -- # set +x 00:14:58.353 09:41:26 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:14:58.353 09:41:26 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:14:58.353 09:41:26 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:14:58.353 09:41:26 -- common/autotest_common.sh@1577 -- # bdfs=() 00:14:58.353 09:41:26 -- common/autotest_common.sh@1577 -- # local bdfs 00:14:58.353 09:41:26 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:14:58.353 09:41:26 -- common/autotest_common.sh@1513 -- # bdfs=() 00:14:58.353 09:41:26 -- common/autotest_common.sh@1513 -- # local bdfs 00:14:58.353 09:41:26 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:58.353 09:41:26 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:58.353 09:41:26 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:14:58.353 09:41:26 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:14:58.353 09:41:26 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:14:58.353 09:41:26 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:14:58.353 09:41:26 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:14:58.353 cat: /sys/bus/pci/devices/0000:00:10.0/device: No such file or directory 00:14:58.353 09:41:26 -- common/autotest_common.sh@1580 -- # device= 00:14:58.353 09:41:26 -- common/autotest_common.sh@1580 -- # true 00:14:58.353 09:41:26 -- common/autotest_common.sh@1581 -- # [[ '' == \0\x\0\a\5\4 ]] 00:14:58.353 09:41:26 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:14:58.353 09:41:26 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:14:58.353 09:41:26 -- common/autotest_common.sh@1593 -- # return 0 00:14:58.353 09:41:26 -- spdk/autotest.sh@150 -- # '[' 1 -eq 1 ']' 00:14:58.353 09:41:26 -- spdk/autotest.sh@151 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:14:58.353 09:41:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:58.353 09:41:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:58.353 09:41:26 -- common/autotest_common.sh@10 -- # set +x 00:14:58.353 ************************************ 00:14:58.353 START TEST unittest 00:14:58.353 ************************************ 00:14:58.353 09:41:26 unittest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:14:58.353 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:14:58.353 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:14:58.353 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:14:58.353 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:14:58.353 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../.. 
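
The opal_revert_cleanup step above discovers NVMe controllers by parsing gen_nvme.sh output with jq, then filters by PCI device ID. A sketch of that discovery loop, assuming the same repo layout; note the /sys lookup is Linux-only, so on FreeBSD the cat fails and — exactly as the log shows — no device matches:

    # Enumerate NVMe PCI addresses: gen_nvme.sh emits a bdev config whose
    # traddr fields are the controllers' BDFs.
    rootdir=/home/vagrant/spdk_repo/spdk
    mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')

    for bdf in "${bdfs[@]}"; do
        # Keep only Opal-capable 0x0a54 devices; absent sysfs means no match.
        device=$(cat "/sys/bus/pci/devices/$bdf/device" 2>/dev/null) || device=
        [[ $device == 0x0a54 ]] && printf '%s\n' "$bdf"
    done
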
00:14:58.353 + rootdir=/home/vagrant/spdk_repo/spdk 00:14:58.353 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:14:58.353 ++ rpc_py=rpc_cmd 00:14:58.353 ++ set -e 00:14:58.353 ++ shopt -s nullglob 00:14:58.353 ++ shopt -s extglob 00:14:58.353 ++ shopt -s inherit_errexit 00:14:58.353 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:14:58.353 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:14:58.353 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:14:58.353 +++ CONFIG_WPDK_DIR= 00:14:58.353 +++ CONFIG_ASAN=n 00:14:58.353 +++ CONFIG_VBDEV_COMPRESS=n 00:14:58.353 +++ CONFIG_HAVE_EXECINFO_H=y 00:14:58.353 +++ CONFIG_USDT=n 00:14:58.353 +++ CONFIG_CUSTOMOCF=n 00:14:58.353 +++ CONFIG_PREFIX=/usr/local 00:14:58.353 +++ CONFIG_RBD=n 00:14:58.353 +++ CONFIG_LIBDIR= 00:14:58.353 +++ CONFIG_IDXD=y 00:14:58.353 +++ CONFIG_NVME_CUSE=n 00:14:58.353 +++ CONFIG_SMA=n 00:14:58.353 +++ CONFIG_VTUNE=n 00:14:58.353 +++ CONFIG_TSAN=n 00:14:58.353 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:14:58.353 +++ CONFIG_VFIO_USER_DIR= 00:14:58.353 +++ CONFIG_PGO_CAPTURE=n 00:14:58.353 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=n 00:14:58.353 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:14:58.353 +++ CONFIG_LTO=n 00:14:58.353 +++ CONFIG_ISCSI_INITIATOR=n 00:14:58.353 +++ CONFIG_CET=n 00:14:58.353 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:14:58.353 +++ CONFIG_OCF_PATH= 00:14:58.353 +++ CONFIG_RDMA_SET_TOS=y 00:14:58.353 +++ CONFIG_HAVE_ARC4RANDOM=y 00:14:58.353 +++ CONFIG_HAVE_LIBARCHIVE=n 00:14:58.353 +++ CONFIG_UBLK=n 00:14:58.353 +++ CONFIG_ISAL_CRYPTO=y 00:14:58.353 +++ CONFIG_OPENSSL_PATH= 00:14:58.353 +++ CONFIG_OCF=n 00:14:58.353 +++ CONFIG_FUSE=n 00:14:58.353 +++ CONFIG_VTUNE_DIR= 00:14:58.353 +++ CONFIG_FUZZER_LIB= 00:14:58.353 +++ CONFIG_FUZZER=n 00:14:58.353 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:14:58.353 +++ CONFIG_CRYPTO=n 00:14:58.353 +++ CONFIG_PGO_USE=n 00:14:58.353 +++ CONFIG_VHOST=n 00:14:58.353 +++ CONFIG_DAOS=n 00:14:58.353 +++ CONFIG_DPDK_INC_DIR= 00:14:58.353 +++ CONFIG_DAOS_DIR= 00:14:58.353 +++ CONFIG_UNIT_TESTS=y 00:14:58.353 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=n 00:14:58.353 +++ CONFIG_VIRTIO=n 00:14:58.353 +++ CONFIG_DPDK_UADK=n 00:14:58.353 +++ CONFIG_COVERAGE=n 00:14:58.353 +++ CONFIG_RDMA=y 00:14:58.353 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:14:58.353 +++ CONFIG_URING_PATH= 00:14:58.353 +++ CONFIG_XNVME=n 00:14:58.353 +++ CONFIG_VFIO_USER=n 00:14:58.353 +++ CONFIG_ARCH=native 00:14:58.353 +++ CONFIG_HAVE_EVP_MAC=y 00:14:58.353 +++ CONFIG_URING_ZNS=n 00:14:58.353 +++ CONFIG_WERROR=y 00:14:58.353 +++ CONFIG_HAVE_LIBBSD=n 00:14:58.353 +++ CONFIG_UBSAN=n 00:14:58.353 +++ CONFIG_IPSEC_MB_DIR= 00:14:58.353 +++ CONFIG_GOLANG=n 00:14:58.353 +++ CONFIG_ISAL=y 00:14:58.353 +++ CONFIG_IDXD_KERNEL=n 00:14:58.353 +++ CONFIG_DPDK_LIB_DIR= 00:14:58.353 +++ CONFIG_RDMA_PROV=verbs 00:14:58.353 +++ CONFIG_APPS=y 00:14:58.353 +++ CONFIG_SHARED=n 00:14:58.353 +++ CONFIG_HAVE_KEYUTILS=n 00:14:58.353 +++ CONFIG_FC_PATH= 00:14:58.353 +++ CONFIG_DPDK_PKG_CONFIG=n 00:14:58.353 +++ CONFIG_FC=n 00:14:58.353 +++ CONFIG_AVAHI=n 00:14:58.353 +++ CONFIG_FIO_PLUGIN=y 00:14:58.353 +++ CONFIG_RAID5F=n 00:14:58.353 +++ CONFIG_EXAMPLES=y 00:14:58.353 +++ CONFIG_TESTS=y 00:14:58.353 +++ CONFIG_CRYPTO_MLX5=n 00:14:58.353 +++ CONFIG_MAX_LCORES=128 00:14:58.353 +++ CONFIG_IPSEC_MB=n 00:14:58.353 +++ CONFIG_PGO_DIR= 00:14:58.353 +++ CONFIG_DEBUG=y 00:14:58.353 +++ CONFIG_DPDK_COMPRESSDEV=n 00:14:58.353 +++ CONFIG_CROSS_PREFIX= 00:14:58.353 
+++ CONFIG_URING=n 00:14:58.353 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:14:58.353 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:14:58.353 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:14:58.353 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:14:58.353 +++ _root=/home/vagrant/spdk_repo/spdk 00:14:58.353 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:14:58.353 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:14:58.353 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:14:58.353 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:14:58.353 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:14:58.353 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:14:58.353 +++ VHOST_APP=("$_app_dir/vhost") 00:14:58.353 +++ DD_APP=("$_app_dir/spdk_dd") 00:14:58.353 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:14:58.353 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:14:58.353 +++ [[ #ifndef SPDK_CONFIG_H 00:14:58.353 #define SPDK_CONFIG_H 00:14:58.353 #define SPDK_CONFIG_APPS 1 00:14:58.353 #define SPDK_CONFIG_ARCH native 00:14:58.353 #undef SPDK_CONFIG_ASAN 00:14:58.353 #undef SPDK_CONFIG_AVAHI 00:14:58.353 #undef SPDK_CONFIG_CET 00:14:58.353 #undef SPDK_CONFIG_COVERAGE 00:14:58.353 #define SPDK_CONFIG_CROSS_PREFIX 00:14:58.353 #undef SPDK_CONFIG_CRYPTO 00:14:58.353 #undef SPDK_CONFIG_CRYPTO_MLX5 00:14:58.353 #undef SPDK_CONFIG_CUSTOMOCF 00:14:58.353 #undef SPDK_CONFIG_DAOS 00:14:58.353 #define SPDK_CONFIG_DAOS_DIR 00:14:58.353 #define SPDK_CONFIG_DEBUG 1 00:14:58.353 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:14:58.353 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:14:58.353 #define SPDK_CONFIG_DPDK_INC_DIR 00:14:58.353 #define SPDK_CONFIG_DPDK_LIB_DIR 00:14:58.353 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:14:58.353 #undef SPDK_CONFIG_DPDK_UADK 00:14:58.353 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:14:58.353 #define SPDK_CONFIG_EXAMPLES 1 00:14:58.353 #undef SPDK_CONFIG_FC 00:14:58.353 #define SPDK_CONFIG_FC_PATH 00:14:58.353 #define SPDK_CONFIG_FIO_PLUGIN 1 00:14:58.353 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:14:58.353 #undef SPDK_CONFIG_FUSE 00:14:58.353 #undef SPDK_CONFIG_FUZZER 00:14:58.353 #define SPDK_CONFIG_FUZZER_LIB 00:14:58.353 #undef SPDK_CONFIG_GOLANG 00:14:58.353 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:14:58.353 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:14:58.353 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:14:58.353 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:14:58.353 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:14:58.353 #undef SPDK_CONFIG_HAVE_LIBBSD 00:14:58.353 #undef SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 00:14:58.353 #define SPDK_CONFIG_IDXD 1 00:14:58.353 #undef SPDK_CONFIG_IDXD_KERNEL 00:14:58.353 #undef SPDK_CONFIG_IPSEC_MB 00:14:58.353 #define SPDK_CONFIG_IPSEC_MB_DIR 00:14:58.353 #define SPDK_CONFIG_ISAL 1 00:14:58.353 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:14:58.353 #undef SPDK_CONFIG_ISCSI_INITIATOR 00:14:58.353 #define SPDK_CONFIG_LIBDIR 00:14:58.353 #undef SPDK_CONFIG_LTO 00:14:58.353 #define SPDK_CONFIG_MAX_LCORES 128 00:14:58.353 #undef SPDK_CONFIG_NVME_CUSE 00:14:58.353 #undef SPDK_CONFIG_OCF 00:14:58.353 #define SPDK_CONFIG_OCF_PATH 00:14:58.353 #define SPDK_CONFIG_OPENSSL_PATH 00:14:58.353 #undef SPDK_CONFIG_PGO_CAPTURE 00:14:58.353 #define SPDK_CONFIG_PGO_DIR 00:14:58.353 #undef SPDK_CONFIG_PGO_USE 00:14:58.353 #define SPDK_CONFIG_PREFIX /usr/local 00:14:58.353 #undef SPDK_CONFIG_RAID5F 00:14:58.353 #undef SPDK_CONFIG_RBD 
00:14:58.353 #define SPDK_CONFIG_RDMA 1 00:14:58.353 #define SPDK_CONFIG_RDMA_PROV verbs 00:14:58.353 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:14:58.353 #undef SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 00:14:58.353 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:14:58.353 #undef SPDK_CONFIG_SHARED 00:14:58.353 #undef SPDK_CONFIG_SMA 00:14:58.353 #define SPDK_CONFIG_TESTS 1 00:14:58.353 #undef SPDK_CONFIG_TSAN 00:14:58.353 #undef SPDK_CONFIG_UBLK 00:14:58.353 #undef SPDK_CONFIG_UBSAN 00:14:58.353 #define SPDK_CONFIG_UNIT_TESTS 1 00:14:58.353 #undef SPDK_CONFIG_URING 00:14:58.353 #define SPDK_CONFIG_URING_PATH 00:14:58.353 #undef SPDK_CONFIG_URING_ZNS 00:14:58.353 #undef SPDK_CONFIG_USDT 00:14:58.354 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:14:58.354 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:14:58.354 #undef SPDK_CONFIG_VFIO_USER 00:14:58.354 #define SPDK_CONFIG_VFIO_USER_DIR 00:14:58.354 #undef SPDK_CONFIG_VHOST 00:14:58.354 #undef SPDK_CONFIG_VIRTIO 00:14:58.354 #undef SPDK_CONFIG_VTUNE 00:14:58.354 #define SPDK_CONFIG_VTUNE_DIR 00:14:58.354 #define SPDK_CONFIG_WERROR 1 00:14:58.354 #define SPDK_CONFIG_WPDK_DIR 00:14:58.354 #undef SPDK_CONFIG_XNVME 00:14:58.354 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:14:58.354 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:14:58.354 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:58.354 +++ [[ -e /bin/wpdk_common.sh ]] 00:14:58.354 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:58.354 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:58.354 ++++ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:14:58.354 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:14:58.354 ++++ export PATH 00:14:58.354 ++++ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:14:58.354 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:14:58.354 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:14:58.354 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:14:58.354 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:14:58.354 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:14:58.354 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:14:58.354 +++ TEST_TAG=N/A 00:14:58.354 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:14:58.354 +++ PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:14:58.354 ++++ uname -s 00:14:58.354 +++ PM_OS=FreeBSD 00:14:58.354 +++ MONITOR_RESOURCES_SUDO=() 00:14:58.354 +++ declare -A MONITOR_RESOURCES_SUDO 00:14:58.354 +++ MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:14:58.354 +++ MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:14:58.354 +++ MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:14:58.354 +++ MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:14:58.354 +++ SUDO[0]= 00:14:58.354 +++ SUDO[1]='sudo -E' 00:14:58.354 +++ MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:14:58.354 +++ [[ FreeBSD == FreeBSD ]] 00:14:58.354 +++ MONITOR_RESOURCES=(collect-vmstat) 00:14:58.354 +++ [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:14:58.354 ++ : 0 00:14:58.354 ++ export RUN_NIGHTLY 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_RUN_VALGRIND 00:14:58.354 ++ : 1 00:14:58.354 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:14:58.354 ++ : 1 00:14:58.354 ++ export SPDK_TEST_UNITTEST 00:14:58.354 ++ : 00:14:58.354 ++ export SPDK_TEST_AUTOBUILD 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_TEST_RELEASE_BUILD 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_TEST_ISAL 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_TEST_ISCSI 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_TEST_ISCSI_INITIATOR 00:14:58.354 ++ : 1 00:14:58.354 ++ export SPDK_TEST_NVME 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_TEST_NVME_PMR 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_TEST_NVME_BP 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_TEST_NVME_CLI 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_TEST_NVME_CUSE 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_TEST_NVME_FDP 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_TEST_NVMF 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_TEST_VFIOUSER 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_TEST_VFIOUSER_QEMU 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_TEST_FUZZER 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_TEST_FUZZER_SHORT 00:14:58.354 ++ : rdma 00:14:58.354 ++ export SPDK_TEST_NVMF_TRANSPORT 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_TEST_RBD 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_TEST_VHOST 00:14:58.354 ++ : 1 00:14:58.354 ++ export SPDK_TEST_BLOCKDEV 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_TEST_IOAT 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_TEST_BLOBFS 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_TEST_VHOST_INIT 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_TEST_LVOL 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_TEST_VBDEV_COMPRESS 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_RUN_ASAN 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_RUN_UBSAN 00:14:58.354 ++ : 00:14:58.354 ++ export SPDK_RUN_EXTERNAL_DPDK 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_RUN_NON_ROOT 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_TEST_CRYPTO 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_TEST_FTL 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_TEST_OCF 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_TEST_VMD 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_TEST_OPAL 00:14:58.354 ++ : 00:14:58.354 ++ export SPDK_TEST_NATIVE_DPDK 00:14:58.354 ++ : true 00:14:58.354 ++ export SPDK_AUTOTEST_X 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_TEST_RAID5 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_TEST_URING 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_TEST_USDT 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_TEST_USE_IGB_UIO 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_TEST_SCHEDULER 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_TEST_SCANBUILD 00:14:58.354 ++ : 00:14:58.354 ++ export SPDK_TEST_NVMF_NICS 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_TEST_SMA 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_TEST_DAOS 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_TEST_XNVME 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_TEST_ACCEL_DSA 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_TEST_ACCEL_IAA 00:14:58.354 ++ : 00:14:58.354 ++ export SPDK_TEST_FUZZER_TARGET 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_TEST_NVMF_MDNS 00:14:58.354 ++ : 0 00:14:58.354 ++ export SPDK_JSONRPC_GO_CLIENT 00:14:58.354 ++ export 
SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:14:58.354 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:14:58.354 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:14:58.354 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:14:58.354 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:58.354 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:58.354 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:58.354 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:58.354 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:14:58.354 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:14:58.354 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:14:58.354 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:14:58.354 ++ export PYTHONDONTWRITEBYTECODE=1 00:14:58.354 ++ PYTHONDONTWRITEBYTECODE=1 00:14:58.354 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:58.354 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:58.354 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:58.354 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:58.354 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:14:58.354 ++ rm -rf /var/tmp/asan_suppression_file 00:14:58.354 ++ cat 00:14:58.354 ++ echo leak:libfuse3.so 00:14:58.354 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:58.354 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:58.354 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:58.354 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:58.354 ++ '[' -z /var/spdk/dependencies ']' 00:14:58.354 ++ export DEPENDENCY_DIR 00:14:58.354 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:14:58.354 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:14:58.354 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:14:58.354 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:14:58.354 ++ export QEMU_BIN= 00:14:58.354 ++ QEMU_BIN= 00:14:58.354 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:14:58.354 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:14:58.354 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:14:58.354 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:14:58.354 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:58.354 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:58.354 ++ '[' 0 -eq 0 ']' 00:14:58.354 ++ export valgrind= 00:14:58.354 ++ valgrind= 00:14:58.354 +++ uname -s 00:14:58.354 ++ '[' FreeBSD = Linux ']' 
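
The sanitizer setup traced above amounts to building a one-line LSAN suppression file and exporting the harness's standard options. A sketch using the exact values from the log:

    # Throwaway leak-sanitizer suppression file plus the usual knobs.
    supp=/var/tmp/asan_suppression_file
    rm -rf "$supp"
    echo 'leak:libfuse3.so' > "$supp"    # known benign leak, per the suppression above

    export LSAN_OPTIONS="suppressions=$supp"
    export ASAN_OPTIONS='new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0'
    export UBSAN_OPTIONS='halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134'
    export PYTHONDONTWRITEBYTECODE=1     # keep the repo free of .pyc files
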
00:14:58.354 +++ uname -s 00:14:58.354 ++ '[' FreeBSD = FreeBSD ']' 00:14:58.354 ++ MAKE=gmake 00:14:58.354 +++ sysctl -a 00:14:58.354 +++ grep -E -i hw.ncpu 00:14:58.354 +++ awk '{print $2}' 00:14:58.614 ++ MAKEFLAGS=-j10 00:14:58.614 ++ HUGEMEM=2048 00:14:58.614 ++ export HUGEMEM=2048 00:14:58.614 ++ HUGEMEM=2048 00:14:58.614 ++ NO_HUGE=() 00:14:58.614 ++ TEST_MODE= 00:14:58.614 ++ [[ -z '' ]] 00:14:58.614 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:14:58.614 ++ exec 00:14:58.614 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:14:58.614 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:14:58.614 ++ set_test_storage 2147483648 00:14:58.614 ++ [[ -v testdir ]] 00:14:58.614 ++ local requested_size=2147483648 00:14:58.614 ++ local mount target_dir 00:14:58.614 ++ local -A mounts fss sizes avails uses 00:14:58.614 ++ local source fs size avail mount use 00:14:58.614 ++ local storage_fallback storage_candidates 00:14:58.614 +++ mktemp -udt spdk.XXXXXX 00:14:58.614 ++ storage_fallback=/tmp/spdk.XXXXXX.Cvk3885Co5 00:14:58.614 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:14:58.614 ++ [[ -n '' ]] 00:14:58.614 ++ [[ -n '' ]] 00:14:58.614 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.XXXXXX.Cvk3885Co5/tests/unit /tmp/spdk.XXXXXX.Cvk3885Co5 00:14:58.614 ++ requested_size=2214592512 00:14:58.614 ++ read -r source fs size use avail _ mount 00:14:58.614 +++ df -T 00:14:58.614 +++ grep -v Filesystem 00:14:58.614 ++ mounts["$mount"]=/dev/gptid/043e6f36-2a13-11ef-a525-001e676338ce 00:14:58.614 ++ fss["$mount"]=ufs 00:14:58.614 ++ avails["$mount"]=17237311488 00:14:58.614 ++ sizes["$mount"]=31182712832 00:14:58.614 ++ uses["$mount"]=11450785792 00:14:58.614 ++ read -r source fs size use avail _ mount 00:14:58.614 ++ mounts["$mount"]=devfs 00:14:58.614 ++ fss["$mount"]=devfs 00:14:58.614 ++ avails["$mount"]=1024 00:14:58.614 ++ sizes["$mount"]=1024 00:14:58.614 ++ uses["$mount"]=0 00:14:58.614 ++ read -r source fs size use avail _ mount 00:14:58.614 ++ mounts["$mount"]=tmpfs 00:14:58.614 ++ fss["$mount"]=tmpfs 00:14:58.614 ++ avails["$mount"]=2147438592 00:14:58.614 ++ sizes["$mount"]=2147483648 00:14:58.614 ++ uses["$mount"]=45056 00:14:58.614 ++ read -r source fs size use avail _ mount 00:14:58.614 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest/freebsd14-libvirt/output 00:14:58.614 ++ fss["$mount"]=fusefs.sshfs 00:14:58.614 ++ avails["$mount"]=87715000320 00:14:58.614 ++ sizes["$mount"]=105088212992 00:14:58.614 ++ uses["$mount"]=11987779584 00:14:58.614 ++ read -r source fs size use avail _ mount 00:14:58.614 ++ printf '* Looking for test storage...\n' 00:14:58.614 * Looking for test storage... 
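
set_test_storage, traced above, snapshots every mount with df -T into parallel arrays before choosing a candidate. A condensed sketch of that parsing step (field order matches the read in the trace; mount points containing spaces are ignored for simplicity):

    # Read df -T output into associative arrays keyed by mount point.
    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$size
        avails["$mount"]=$avail
        uses["$mount"]=$use
    done < <(df -T | grep -v Filesystem)
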
00:14:58.614 ++ local target_space new_size 00:14:58.614 ++ for target_dir in "${storage_candidates[@]}" 00:14:58.614 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:14:58.614 +++ awk '$1 !~ /Filesystem/{print $6}' 00:14:58.614 ++ mount=/ 00:14:58.614 ++ target_space=17237311488 00:14:58.614 ++ (( target_space == 0 || target_space < requested_size )) 00:14:58.614 ++ (( target_space >= requested_size )) 00:14:58.614 ++ [[ ufs == tmpfs ]] 00:14:58.614 ++ [[ ufs == ramfs ]] 00:14:58.614 ++ [[ / == / ]] 00:14:58.614 ++ new_size=13665378304 00:14:58.614 ++ (( new_size * 100 / sizes[/] > 95 )) 00:14:58.614 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:14:58.614 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:14:58.614 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:14:58.614 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:14:58.614 ++ return 0 00:14:58.614 ++ set -o errtrace 00:14:58.614 ++ shopt -s extdebug 00:14:58.614 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:14:58.614 ++ PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:14:58.614 09:41:26 unittest -- common/autotest_common.sh@1687 -- # true 00:14:58.614 09:41:26 unittest -- common/autotest_common.sh@1689 -- # xtrace_fd 00:14:58.614 09:41:26 unittest -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:14:58.614 09:41:26 unittest -- common/autotest_common.sh@29 -- # exec 00:14:58.614 09:41:26 unittest -- common/autotest_common.sh@31 -- # xtrace_restore 00:14:58.614 09:41:26 unittest -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:14:58.614 09:41:26 unittest -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:14:58.614 09:41:26 unittest -- common/autotest_common.sh@18 -- # set -x 00:14:58.615 09:41:26 unittest -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:14:58.615 09:41:26 unittest -- unit/unittest.sh@153 -- # '[' 0 -eq 1 ']' 00:14:58.615 09:41:26 unittest -- unit/unittest.sh@160 -- # '[' -z x ']' 00:14:58.615 09:41:26 unittest -- unit/unittest.sh@167 -- # '[' 0 -eq 1 ']' 00:14:58.615 09:41:26 unittest -- unit/unittest.sh@180 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:14:58.615 09:41:26 unittest -- unit/unittest.sh@180 -- # CC_TYPE=CC_TYPE=clang 00:14:58.615 09:41:26 unittest -- unit/unittest.sh@181 -- # hash lcov 00:14:58.615 /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh: line 181: hash: lcov: not found 00:14:58.615 09:41:26 unittest -- unit/unittest.sh@184 -- # cov_avail=no 00:14:58.615 09:41:26 unittest -- unit/unittest.sh@186 -- # '[' no = yes ']' 00:14:58.615 09:41:26 unittest -- unit/unittest.sh@208 -- # uname -m 00:14:58.615 09:41:26 unittest -- unit/unittest.sh@208 -- # '[' amd64 = aarch64 ']' 00:14:58.615 09:41:26 unittest -- unit/unittest.sh@212 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:14:58.615 09:41:26 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:58.615 09:41:26 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:58.615 09:41:26 unittest -- common/autotest_common.sh@10 -- # set +x 00:14:58.615 ************************************ 00:14:58.615 START TEST unittest_pci_event 00:14:58.615 ************************************ 00:14:58.615 09:41:26 unittest.unittest_pci_event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:14:58.615 00:14:58.615 
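
The sizing decision traced above is plain integer arithmetic: the requested ~2 GiB is added to the space already used on the candidate mount, and the mount is rejected if the result would push the filesystem past 95% full. A worked sketch with this run's numbers:

    # Values captured from the trace above (ufs root filesystem).
    requested_size=2214592512            # 2 GiB plus slack
    target_space=17237311488             # bytes available
    fs_size=31182712832                  # total filesystem size
    used=11450785792                     # bytes already used

    if (( target_space >= requested_size )); then
        new_size=$(( used + requested_size ))          # 13665378304 in this run
        if (( new_size * 100 / fs_size > 95 )); then   # ~44% here, so it passes
            echo "candidate too full, trying next mount" >&2
        else
            echo "using this mount for test storage"
        fi
    fi
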
00:14:58.615 CUnit - A unit testing framework for C - Version 2.1-3 00:14:58.615 http://cunit.sourceforge.net/ 00:14:58.615 00:14:58.615 00:14:58.615 Suite: pci_event 00:14:58.615 Test: test_pci_parse_event ...passed 00:14:58.615 00:14:58.615 Run Summary: Type Total Ran Passed Failed Inactive 00:14:58.615 suites 1 1 n/a 0 0 00:14:58.615 tests 1 1 1 0 0 00:14:58.615 asserts 1 1 1 0 n/a 00:14:58.615 00:14:58.615 Elapsed time = 0.000 seconds 00:14:58.615 00:14:58.615 real 0m0.024s 00:14:58.615 user 0m0.007s 00:14:58.615 sys 0m0.005s 00:14:58.615 09:41:26 unittest.unittest_pci_event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:58.615 09:41:26 unittest.unittest_pci_event -- common/autotest_common.sh@10 -- # set +x 00:14:58.615 ************************************ 00:14:58.615 END TEST unittest_pci_event 00:14:58.615 ************************************ 00:14:58.615 09:41:26 unittest -- common/autotest_common.sh@1142 -- # return 0 00:14:58.615 09:41:26 unittest -- unit/unittest.sh@213 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:14:58.615 09:41:26 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:58.615 09:41:26 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:58.615 09:41:26 unittest -- common/autotest_common.sh@10 -- # set +x 00:14:58.615 ************************************ 00:14:58.615 START TEST unittest_include 00:14:58.615 ************************************ 00:14:58.615 09:41:26 unittest.unittest_include -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:14:58.615 00:14:58.615 00:14:58.615 CUnit - A unit testing framework for C - Version 2.1-3 00:14:58.615 http://cunit.sourceforge.net/ 00:14:58.615 00:14:58.615 00:14:58.615 Suite: histogram 00:14:58.615 Test: histogram_test ...passed 00:14:58.615 Test: histogram_merge ...passed 00:14:58.615 00:14:58.615 Run Summary: Type Total Ran Passed Failed Inactive 00:14:58.615 suites 1 1 n/a 0 0 00:14:58.615 tests 2 2 2 0 0 00:14:58.615 asserts 50 50 50 0 n/a 00:14:58.615 00:14:58.615 Elapsed time = 0.000 seconds 00:14:58.615 00:14:58.615 real 0m0.006s 00:14:58.615 user 0m0.005s 00:14:58.615 sys 0m0.005s 00:14:58.615 09:41:26 unittest.unittest_include -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:58.615 ************************************ 00:14:58.615 END TEST unittest_include 00:14:58.615 ************************************ 00:14:58.615 09:41:26 unittest.unittest_include -- common/autotest_common.sh@10 -- # set +x 00:14:58.615 09:41:26 unittest -- common/autotest_common.sh@1142 -- # return 0 00:14:58.615 09:41:26 unittest -- unit/unittest.sh@214 -- # run_test unittest_bdev unittest_bdev 00:14:58.615 09:41:26 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:58.615 09:41:26 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:58.615 09:41:26 unittest -- common/autotest_common.sh@10 -- # set +x 00:14:58.615 ************************************ 00:14:58.615 START TEST unittest_bdev 00:14:58.615 ************************************ 00:14:58.615 09:41:26 unittest.unittest_bdev -- common/autotest_common.sh@1123 -- # unittest_bdev 00:14:58.615 09:41:26 unittest.unittest_bdev -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:14:58.615 00:14:58.615 00:14:58.615 CUnit - A unit testing framework for C - Version 2.1-3 00:14:58.615 http://cunit.sourceforge.net/ 
00:14:58.615 00:14:58.615 00:14:58.615 Suite: bdev 00:14:58.615 Test: bytes_to_blocks_test ...passed 00:14:58.615 Test: num_blocks_test ...passed 00:14:58.615 Test: io_valid_test ...passed 00:14:58.615 Test: open_write_test ...[2024-07-15 09:41:26.651969] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8104:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:14:58.615 [2024-07-15 09:41:26.652278] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8104:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:14:58.615 [2024-07-15 09:41:26.652297] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8104:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:14:58.615 passed 00:14:58.615 Test: claim_test ...passed 00:14:58.615 Test: alias_add_del_test ...[2024-07-15 09:41:26.655732] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4633:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:14:58.615 [2024-07-15 09:41:26.655796] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4663:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:14:58.615 [2024-07-15 09:41:26.655813] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4633:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:14:58.615 passed 00:14:58.615 Test: get_device_stat_test ...passed 00:14:58.615 Test: bdev_io_types_test ...passed 00:14:58.615 Test: bdev_io_wait_test ...passed 00:14:58.615 Test: bdev_io_spans_split_test ...passed 00:14:58.615 Test: bdev_io_boundary_split_test ...passed 00:14:58.615 Test: bdev_io_max_size_and_segment_split_test ...[2024-07-15 09:41:26.663360] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3214:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:14:58.615 passed 00:14:58.615 Test: bdev_io_mix_split_test ...passed 00:14:58.615 Test: bdev_io_split_with_io_wait ...passed 00:14:58.615 Test: bdev_io_write_unit_split_test ...[2024-07-15 09:41:26.668879] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2766:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:14:58.615 [2024-07-15 09:41:26.668962] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2766:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:14:58.615 [2024-07-15 09:41:26.668980] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2766:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:14:58.615 [2024-07-15 09:41:26.668996] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2766:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:14:58.615 passed 00:14:58.615 Test: bdev_io_alignment_with_boundary ...passed 00:14:58.615 Test: bdev_io_alignment ...passed 00:14:58.615 Test: bdev_histograms ...passed 00:14:58.615 Test: bdev_write_zeroes ...passed 00:14:58.615 Test: bdev_compare_and_write ...passed 00:14:58.615 Test: bdev_compare ...passed 00:14:58.615 Test: bdev_compare_emulated ...passed 00:14:58.615 Test: bdev_zcopy_write ...passed 00:14:58.615 Test: bdev_zcopy_read ...passed 00:14:58.615 Test: bdev_open_while_hotremove ...passed 00:14:58.615 Test: bdev_close_while_hotremove ...passed 00:14:58.615 Test: bdev_open_ext_test ...passed 00:14:58.615 Test: bdev_open_ext_unregister ...[2024-07-15 09:41:26.689633] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8210:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:14:58.615 passed 00:14:58.615 Test: bdev_set_io_timeout ...[2024-07-15 09:41:26.689725] 
/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8210:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:14:58.615 passed 00:14:58.615 Test: bdev_set_qd_sampling ...passed 00:14:58.615 Test: lba_range_overlap ...passed 00:14:58.615 Test: lock_lba_range_check_ranges ...passed 00:14:58.615 Test: lock_lba_range_with_io_outstanding ...passed 00:14:58.615 Test: lock_lba_range_overlapped ...passed 00:14:58.615 Test: bdev_quiesce ...[2024-07-15 09:41:26.698415] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:10179:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 00:14:58.615 passed 00:14:58.615 Test: bdev_io_abort ...passed 00:14:58.615 Test: bdev_unmap ...passed 00:14:58.615 Test: bdev_write_zeroes_split_test ...passed 00:14:58.615 Test: bdev_set_options_test ...[2024-07-15 09:41:26.703111] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 502:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:14:58.615 passed 00:14:58.615 Test: bdev_get_memory_domains ...passed 00:14:58.615 Test: bdev_io_ext ...passed 00:14:58.615 Test: bdev_io_ext_no_opts ...passed 00:14:58.615 Test: bdev_io_ext_invalid_opts ...passed 00:14:58.877 Test: bdev_io_ext_split ...passed 00:14:58.877 Test: bdev_io_ext_bounce_buffer ...passed 00:14:58.877 Test: bdev_register_uuid_alias ...[2024-07-15 09:41:26.711199] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 67dd2be8-428e-11ef-a0af-c98d8ee52a94 already exists 00:14:58.877 [2024-07-15 09:41:26.711263] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7748:bdev_register: *ERROR*: Unable to add uuid:67dd2be8-428e-11ef-a0af-c98d8ee52a94 alias for bdev bdev0 00:14:58.877 passed 00:14:58.877 Test: bdev_unregister_by_name ...[2024-07-15 09:41:26.711618] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8000:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:14:58.877 passed 00:14:58.877 Test: for_each_bdev_test ...[2024-07-15 09:41:26.711638] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8009:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 
00:14:58.877 passed 00:14:58.877 Test: bdev_seek_test ...passed 00:14:58.877 Test: bdev_copy ...passed 00:14:58.877 Test: bdev_copy_split_test ...passed 00:14:58.877 Test: examine_locks ...passed 00:14:58.877 Test: claim_v2_rwo ...passed 00:14:58.877 Test: claim_v2_rom ...[2024-07-15 09:41:26.716268] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8104:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:14:58.877 [2024-07-15 09:41:26.716312] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8734:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:14:58.877 [2024-07-15 09:41:26.716325] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8899:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:14:58.877 [2024-07-15 09:41:26.716337] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8899:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:14:58.877 [2024-07-15 09:41:26.716347] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8571:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:14:58.877 [2024-07-15 09:41:26.716361] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:14:58.877 [2024-07-15 09:41:26.716399] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8104:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:14:58.877 passed 00:14:58.877 Test: claim_v2_rwm ...passed 00:14:58.877 Test: claim_v2_existing_writer ...passed 00:14:58.877 Test: claim_v2_existing_v1 ...[2024-07-15 09:41:26.716411] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8899:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:14:58.877 [2024-07-15 09:41:26.716421] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8899:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:14:58.877 [2024-07-15 09:41:26.716431] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8571:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:14:58.877 [2024-07-15 09:41:26.716445] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8772:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:14:58.877 [2024-07-15 09:41:26.716456] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8768:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:14:58.877 [2024-07-15 09:41:26.716481] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8803:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:14:58.877 [2024-07-15 09:41:26.716493] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8104:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:14:58.877 [2024-07-15 09:41:26.716502] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8899:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:14:58.877 [2024-07-15 09:41:26.716511] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8899:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:14:58.877 [2024-07-15 
09:41:26.716519] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8571:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:14:58.877 [2024-07-15 09:41:26.716528] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8822:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:14:58.877 [2024-07-15 09:41:26.716540] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8803:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:14:58.877 [2024-07-15 09:41:26.716568] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8768:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:14:58.877 [2024-07-15 09:41:26.716579] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8768:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:14:58.877 passed 00:14:58.877 Test: claim_v1_existing_v2 ...passed 00:14:58.877 Test: examine_claimed ...passed 00:14:58.877 00:14:58.877 [2024-07-15 09:41:26.716609] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8899:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:14:58.877 [2024-07-15 09:41:26.716657] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8899:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:14:58.877 [2024-07-15 09:41:26.716668] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8899:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:14:58.878 [2024-07-15 09:41:26.716694] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8571:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:14:58.878 [2024-07-15 09:41:26.716708] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8571:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:14:58.878 [2024-07-15 09:41:26.716724] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8571:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:14:58.878 [2024-07-15 09:41:26.716771] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8899:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:14:58.878 Run Summary: Type Total Ran Passed Failed Inactive 00:14:58.878 suites 1 1 n/a 0 0 00:14:58.878 tests 59 59 59 0 0 00:14:58.878 asserts 4599 4599 4599 0 n/a 00:14:58.878 00:14:58.878 Elapsed time = 0.070 seconds 00:14:58.878 09:41:26 unittest.unittest_bdev -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:14:58.878 00:14:58.878 00:14:58.878 CUnit - A unit testing framework for C - Version 2.1-3 00:14:58.878 http://cunit.sourceforge.net/ 00:14:58.878 00:14:58.878 00:14:58.878 Suite: nvme 00:14:58.878 Test: test_create_ctrlr ...passed 00:14:58.878 Test: test_reset_ctrlr ...passed 00:14:58.878 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:14:58.878 Test: test_failover_ctrlr ...passed 00:14:58.878 Test: test_race_between_failover_and_add_secondary_trid ...[2024-07-15 09:41:26.726637] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:14:58.878 [2024-07-15 09:41:26.727108] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:14:58.878 [2024-07-15 09:41:26.727143] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:14:58.878 [2024-07-15 09:41:26.727164] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:14:58.878 passed 00:14:58.878 Test: test_pending_reset ...[2024-07-15 09:41:26.727352] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:14:58.878 passed 00:14:58.878 Test: test_attach_ctrlr ...[2024-07-15 09:41:26.727399] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:14:58.878 [2024-07-15 09:41:26.727479] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:14:58.878 passed 00:14:58.878 Test: test_aer_cb ...passed 00:14:58.878 Test: test_submit_nvme_cmd ...passed 00:14:58.878 Test: test_add_remove_trid ...passed 00:14:58.878 Test: test_abort ...[2024-07-15 09:41:26.727772] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7452:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:14:58.878 passed 00:14:58.878 Test: test_get_io_qpair ...passed 00:14:58.878 Test: test_bdev_unregister ...passed 00:14:58.878 Test: test_compare_ns ...passed 00:14:58.878 Test: test_init_ana_log_page ...passed 00:14:58.878 Test: test_get_memory_domains ...passed 00:14:58.878 Test: test_reconnect_qpair ...[2024-07-15 09:41:26.728056] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:14:58.878 passed 00:14:58.878 Test: test_create_bdev_ctrlr ...[2024-07-15 09:41:26.728116] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5382:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:14:58.878 passed 00:14:58.878 Test: test_add_multi_ns_to_bdev ...[2024-07-15 09:41:26.728241] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4573:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:14:58.878 passed 00:14:58.878 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:14:58.878 Test: test_admin_path ...passed 00:14:58.878 Test: test_reset_bdev_ctrlr ...passed 00:14:58.878 Test: test_find_io_path ...passed 00:14:58.878 Test: test_retry_io_if_ana_state_is_updating ...passed 00:14:58.878 Test: test_retry_io_for_io_path_error ...passed 00:14:58.878 Test: test_retry_io_count ...passed 00:14:58.878 Test: test_concurrent_read_ana_log_page ...passed 00:14:58.878 Test: test_retry_io_for_ana_error ...passed 00:14:58.878 Test: test_check_io_error_resiliency_params ...[2024-07-15 09:41:26.728891] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6076:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 00:14:58.878 passed 00:14:58.878 Test: test_retry_io_if_ctrlr_is_resetting ...[2024-07-15 09:41:26.728915] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6080:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 
00:14:58.878 [2024-07-15 09:41:26.728927] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6089:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:14:58.878 [2024-07-15 09:41:26.728939] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6092:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:14:58.878 [2024-07-15 09:41:26.728949] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6104:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:14:58.878 [2024-07-15 09:41:26.728962] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6104:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:14:58.878 [2024-07-15 09:41:26.728975] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6084:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:14:58.878 [2024-07-15 09:41:26.728988] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6099:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:14:58.878 [2024-07-15 09:41:26.729001] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6096:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:14:58.878 passed 00:14:58.878 Test: test_reconnect_ctrlr ...[2024-07-15 09:41:26.729103] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:14:58.878 [2024-07-15 09:41:26.729128] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:14:58.878 [2024-07-15 09:41:26.729169] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:14:58.878 [2024-07-15 09:41:26.729187] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:14:58.878 passed 00:14:58.878 Test: test_retry_failover_ctrlr ...passed 00:14:58.878 Test: test_fail_path ...passed 00:14:58.878 Test: test_nvme_ns_cmp ...passed 00:14:58.878 Test: test_ana_transition ...passed 00:14:58.878 Test: test_set_preferred_path ...passed 00:14:58.878 Test: test_find_next_io_path ...passed 00:14:58.878 Test: test_find_io_path_min_qd ...passed 00:14:58.878 Test: test_disable_auto_failback ...[2024-07-15 09:41:26.729204] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:14:58.878 [2024-07-15 09:41:26.729254] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:14:58.878 [2024-07-15 09:41:26.729314] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:14:58.878 [2024-07-15 09:41:26.729334] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:14:58.878 [2024-07-15 09:41:26.729350] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:14:58.878 [2024-07-15 09:41:26.729365] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:14:58.878 [2024-07-15 09:41:26.729380] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:14:58.878 [2024-07-15 09:41:26.729578] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:14:58.878 passed 00:14:58.878 Test: test_set_multipath_policy ...passed 00:14:58.878 Test: test_uuid_generation ...passed 00:14:58.878 Test: test_retry_io_to_same_path ...passed 00:14:58.878 Test: test_race_between_reset_and_disconnected ...passed 00:14:58.878 Test: test_ctrlr_op_rpc ...passed 00:14:58.878 Test: test_bdev_ctrlr_op_rpc ...passed 00:14:58.878 Test: test_disable_enable_ctrlr ...[2024-07-15 09:41:26.769024] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:14:58.878 [2024-07-15 09:41:26.769086] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:14:58.878 passed 00:14:58.878 Test: test_delete_ctrlr_done ...passed 00:14:58.878 Test: test_ns_remove_during_reset ...passed 00:14:58.878 Test: test_io_path_is_current ...passed 00:14:58.878 00:14:58.878 Run Summary: Type Total Ran Passed Failed Inactive 00:14:58.878 suites 1 1 n/a 0 0 00:14:58.878 tests 49 49 49 0 0 00:14:58.878 asserts 3577 3577 3577 0 n/a 00:14:58.878 00:14:58.878 Elapsed time = 0.008 seconds 00:14:58.878 09:41:26 unittest.unittest_bdev -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:14:58.878 00:14:58.878 00:14:58.878 CUnit - A unit testing framework for C - Version 2.1-3 00:14:58.878 http://cunit.sourceforge.net/ 00:14:58.878 00:14:58.878 Test Options 00:14:58.878 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:14:58.878 00:14:58.878 Suite: raid 00:14:58.878 Test: test_create_raid ...passed 00:14:58.878 Test: test_create_raid_superblock ...passed 00:14:58.878 Test: test_delete_raid ...passed 00:14:58.878 Test: test_create_raid_invalid_args ...[2024-07-15 09:41:26.778841] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1481:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:14:58.878 [2024-07-15 09:41:26.779116] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1475:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:14:58.878 [2024-07-15 09:41:26.779214] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1465:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:14:58.878 [2024-07-15 09:41:26.779250] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3193:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:14:58.878 [2024-07-15 09:41:26.779264] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3369:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:14:58.878 [2024-07-15 09:41:26.779417] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3193:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim 
this bdev as it is already claimed 00:14:58.878 [2024-07-15 09:41:26.779431] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3369:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:14:58.878 passed 00:14:58.878 Test: test_delete_raid_invalid_args ...passed 00:14:58.878 Test: test_io_channel ...passed 00:14:58.878 Test: test_reset_io ...passed 00:14:58.878 Test: test_multi_raid ...passed 00:14:58.878 Test: test_io_type_supported ...passed 00:14:58.878 Test: test_raid_json_dump_info ...passed 00:14:58.878 Test: test_context_size ...passed 00:14:58.878 Test: test_raid_level_conversions ...passed 00:14:58.878 Test: test_raid_io_split ...passed 00:14:58.878 Test: test_raid_process ...passed 00:14:58.878 00:14:58.879 Run Summary: Type Total Ran Passed Failed Inactive 00:14:58.879 suites 1 1 n/a 0 0 00:14:58.879 tests 14 14 14 0 0 00:14:58.879 asserts 6183 6183 6183 0 n/a 00:14:58.879 00:14:58.879 Elapsed time = 0.000 seconds 00:14:58.879 09:41:26 unittest.unittest_bdev -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:14:58.879 00:14:58.879 00:14:58.879 CUnit - A unit testing framework for C - Version 2.1-3 00:14:58.879 http://cunit.sourceforge.net/ 00:14:58.879 00:14:58.879 00:14:58.879 Suite: raid_sb 00:14:58.879 Test: test_raid_bdev_write_superblock ...passed 00:14:58.879 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:14:58.879 Test: test_raid_bdev_parse_superblock ...passed 00:14:58.879 Suite: raid_sb_md 00:14:58.879 Test: test_raid_bdev_write_superblock ...passed 00:14:58.879 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:14:58.879 Test: test_raid_bdev_parse_superblock ...[2024-07-15 09:41:26.788901] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 166:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:14:58.879 passed 00:14:58.879 Suite: raid_sb_md_interleaved 00:14:58.879 Test: test_raid_bdev_write_superblock ...[2024-07-15 09:41:26.789191] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 166:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:14:58.879 passed 00:14:58.879 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:14:58.879 Test: test_raid_bdev_parse_superblock ...passed 00:14:58.879 00:14:58.879 Run Summary: Type Total Ran Passed Failed Inactive 00:14:58.879 suites 3 3 n/a 0 0 00:14:58.879 tests 9 9 9 0 0 00:14:58.879 asserts 139 139 139 0 n/a 00:14:58.879 00:14:58.879 Elapsed time = 0.000 seconds 00:14:58.879 [2024-07-15 09:41:26.789298] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 166:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:14:58.879 09:41:26 unittest.unittest_bdev -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:14:58.879 00:14:58.879 00:14:58.879 CUnit - A unit testing framework for C - Version 2.1-3 00:14:58.879 http://cunit.sourceforge.net/ 00:14:58.879 00:14:58.879 00:14:58.879 Suite: concat 00:14:58.879 Test: test_concat_start ...passed 00:14:58.879 Test: test_concat_rw ...passed 00:14:58.879 Test: test_concat_null_payload ...passed 00:14:58.879 00:14:58.879 Run Summary: Type Total Ran Passed Failed Inactive 00:14:58.879 suites 1 1 n/a 0 0 00:14:58.879 tests 3 3 3 0 0 00:14:58.879 asserts 8460 8460 8460 0 n/a 00:14:58.879 00:14:58.879 Elapsed time = 0.008 seconds 00:14:58.879 
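Note: test_create_raid_invalid_args above rejects an unsupported RAID level (-1), an invalid strip size (1231) and a duplicate name (raid1), then shows the claim failure when a base bdev is reused. A hedged sketch of the three argument checks; the power-of-two strip-size rule is an assumption inferred from the rejection of 1231, and the enum values are illustrative:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

enum raid_level { RAID0, RAID1, CONCAT, RAID_LEVEL_MAX }; /* illustrative values */

static bool
raid_create_args_valid(int level, uint32_t strip_size_kb, const char *name,
                       const char *existing_names[], size_t n_existing)
{
	if (level < 0 || level >= RAID_LEVEL_MAX) {
		return false; /* "Unsupported raid level '-1'" */
	}
	/* "Invalid strip size 1231": assumed to require a nonzero power of two. */
	if (strip_size_kb == 0 || (strip_size_kb & (strip_size_kb - 1)) != 0) {
		return false;
	}
	for (size_t i = 0; i < n_existing; i++) {
		if (strcmp(name, existing_names[i]) == 0) {
			return false; /* "Duplicate raid bdev name found: raid1" */
		}
	}
	return true;
}
```

The "Unable to claim this bdev" / "base bdev 'Nvme0n1' configure failed" pair is a separate runtime failure: claiming a base bdev that another module already owns.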
09:41:26 unittest.unittest_bdev -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid0.c/raid0_ut 00:14:58.879 00:14:58.879 00:14:58.879 CUnit - A unit testing framework for C - Version 2.1-3 00:14:58.879 http://cunit.sourceforge.net/ 00:14:58.879 00:14:58.879 00:14:58.879 Suite: raid0 00:14:58.879 Test: test_write_io ...passed 00:14:58.879 Test: test_read_io ...passed 00:14:58.879 Test: test_unmap_io ...passed 00:14:58.879 Test: test_io_failure ...passed 00:14:58.879 Suite: raid0_dif 00:14:58.879 Test: test_write_io ...passed 00:14:58.879 Test: test_read_io ...passed 00:14:58.879 Test: test_unmap_io ...passed 00:14:58.879 Test: test_io_failure ...passed 00:14:58.879 00:14:58.879 Run Summary: Type Total Ran Passed Failed Inactive 00:14:58.879 suites 2 2 n/a 0 0 00:14:58.879 tests 8 8 8 0 0 00:14:58.879 asserts 368291 368291 368291 0 n/a 00:14:58.879 00:14:58.879 Elapsed time = 0.016 seconds 00:14:58.879 09:41:26 unittest.unittest_bdev -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:14:58.879 00:14:58.879 00:14:58.879 CUnit - A unit testing framework for C - Version 2.1-3 00:14:58.879 http://cunit.sourceforge.net/ 00:14:58.879 00:14:58.879 00:14:58.879 Suite: raid1 00:14:58.879 Test: test_raid1_start ...passed 00:14:58.879 Test: test_raid1_read_balancing ...passed 00:14:58.879 Test: test_raid1_write_error ...passed 00:14:58.879 Test: test_raid1_read_error ...passed 00:14:58.879 00:14:58.879 Run Summary: Type Total Ran Passed Failed Inactive 00:14:58.879 suites 1 1 n/a 0 0 00:14:58.879 tests 4 4 4 0 0 00:14:58.879 asserts 4374 4374 4374 0 n/a 00:14:58.879 00:14:58.879 Elapsed time = 0.000 seconds 00:14:58.879 09:41:26 unittest.unittest_bdev -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:14:58.879 00:14:58.879 00:14:58.879 CUnit - A unit testing framework for C - Version 2.1-3 00:14:58.879 http://cunit.sourceforge.net/ 00:14:58.879 00:14:58.879 00:14:58.879 Suite: zone 00:14:58.879 Test: test_zone_get_operation ...passed 00:14:58.879 Test: test_bdev_zone_get_info ...passed 00:14:58.879 Test: test_bdev_zone_management ...passed 00:14:58.879 Test: test_bdev_zone_append ...passed 00:14:58.879 Test: test_bdev_zone_append_with_md ...passed 00:14:58.879 Test: test_bdev_zone_appendv ...passed 00:14:58.879 Test: test_bdev_zone_appendv_with_md ...passed 00:14:58.879 Test: test_bdev_io_get_append_location ...passed 00:14:58.879 00:14:58.879 Run Summary: Type Total Ran Passed Failed Inactive 00:14:58.879 suites 1 1 n/a 0 0 00:14:58.879 tests 8 8 8 0 0 00:14:58.879 asserts 94 94 94 0 n/a 00:14:58.879 00:14:58.879 Elapsed time = 0.000 seconds 00:14:58.879 09:41:26 unittest.unittest_bdev -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:14:58.879 00:14:58.879 00:14:58.879 CUnit - A unit testing framework for C - Version 2.1-3 00:14:58.879 http://cunit.sourceforge.net/ 00:14:58.879 00:14:58.879 00:14:58.879 Suite: gpt_parse 00:14:58.879 Test: test_parse_mbr_and_primary ...[2024-07-15 09:41:26.843631] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:14:58.879 [2024-07-15 09:41:26.844006] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:14:58.879 [2024-07-15 09:41:26.844063] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: 
head_size=1633771873 00:14:58.879 [2024-07-15 09:41:26.844084] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:14:58.879 [2024-07-15 09:41:26.844108] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:14:58.879 [2024-07-15 09:41:26.844128] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:14:58.879 passed 00:14:58.879 Test: test_parse_secondary ...passed 00:14:58.879 Test: test_check_mbr ...passed 00:14:58.879 Test: test_read_header ...[2024-07-15 09:41:26.844402] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:14:58.879 [2024-07-15 09:41:26.844422] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:14:58.879 [2024-07-15 09:41:26.844443] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:14:58.879 [2024-07-15 09:41:26.844460] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:14:58.879 [2024-07-15 09:41:26.844748] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:14:58.879 [2024-07-15 09:41:26.844769] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:14:58.879 passed 00:14:58.879 Test: test_read_partitions ...[2024-07-15 09:41:26.844797] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:14:58.879 [2024-07-15 09:41:26.844818] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 178:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:14:58.879 [2024-07-15 09:41:26.844838] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:14:58.879 [2024-07-15 09:41:26.844857] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 192:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:14:58.879 [2024-07-15 09:41:26.844877] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 136:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:14:58.879 [2024-07-15 09:41:26.844895] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:14:58.879 [2024-07-15 09:41:26.844922] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:14:58.879 [2024-07-15 09:41:26.844942] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 96:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:14:58.879 [2024-07-15 09:41:26.844960] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:14:58.879 [2024-07-15 09:41:26.844978] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:14:58.879 passed 00:14:58.879 00:14:58.879 [2024-07-15 09:41:26.845106] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 
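Note: test_read_header and test_read_partitions above step through the whole GPT validation chain: header size, header CRC32, signature, my_lba, usable-LBA range, entry count (max 128), entry size (80 in this build) and the partition-array CRC32. A condensed, self-contained C sketch of that chain; the struct layout, the size bounds and the CRC routine are illustrative stand-ins for SPDK's own definitions:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define GPT_SIGNATURE         "EFI PART"
#define MAX_PARTITION_ENTRIES 128u /* "exceeds max=128" */
#define EXPECTED_ENTRY_SIZE   80u  /* "Partition_entry_size(0) != expected(80)" */

/* Illustrative subset of the on-disk GPT header; not SPDK's exact struct. */
struct gpt_header {
	char     signature[8];
	uint32_t revision;
	uint32_t header_size;
	uint32_t header_crc32;
	uint32_t reserved;
	uint64_t my_lba;
	uint64_t alternate_lba;
	uint64_t first_usable_lba;
	uint64_t last_usable_lba;
	uint8_t  disk_guid[16];
	uint64_t partition_entry_lba;
	uint32_t num_partition_entries;
	uint32_t size_of_partition_entry;
	uint32_t partition_entry_array_crc32;
};

/* Plain bitwise CRC-32 so the sketch is self-contained; SPDK uses its own. */
static uint32_t
crc32_of(const void *buf, size_t len)
{
	const uint8_t *p = buf;
	uint32_t crc = 0xffffffffu;

	while (len--) {
		crc ^= *p++;
		for (int k = 0; k < 8; k++) {
			crc = (crc >> 1) ^ (0xedb88320u & (0u - (crc & 1u)));
		}
	}
	return ~crc;
}

static int
gpt_check_header(const struct gpt_header *h, uint64_t expected_my_lba,
                 uint64_t lba_end)
{
	struct gpt_header copy = *h;

	if (h->header_size < 92 || h->header_size > sizeof(copy)) {
		return -1; /* "head_size=600", "head_size=1633771873"; bounds assumed */
	}
	copy.header_crc32 = 0;
	if (crc32_of(&copy, h->header_size) != h->header_crc32) {
		return -1; /* "head crc32 does not match, provided=..., calculated=..." */
	}
	if (memcmp(h->signature, GPT_SIGNATURE, 8) != 0) {
		return -1; /* "signature did not match" */
	}
	if (h->my_lba != expected_my_lba) {
		return -1; /* "head my_lba(...) != expected(1)" */
	}
	if (h->last_usable_lba > lba_end) {
		return -1; /* "usable_lba_end(...) > lba_end(0)", "lba range check error" */
	}
	if (h->num_partition_entries > MAX_PARTITION_ENTRIES) {
		return -1; /* "Num_partition_entries=256 which exceeds max=128" */
	}
	if (h->size_of_partition_entry != EXPECTED_ENTRY_SIZE) {
		return -1; /* entry size mismatch */
	}
	return 0;
}
```

After the header passes, the partition-entry array is read and its own CRC32 is compared, which is the final "GPT partition entry array crc32 did not match" rejection above.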
00:14:58.879 Run Summary: Type Total Ran Passed Failed Inactive 00:14:58.879 suites 1 1 n/a 0 0 00:14:58.879 tests 5 5 5 0 0 00:14:58.879 asserts 33 33 33 0 n/a 00:14:58.879 00:14:58.879 Elapsed time = 0.000 seconds 00:14:58.879 09:41:26 unittest.unittest_bdev -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:14:58.879 00:14:58.879 00:14:58.879 CUnit - A unit testing framework for C - Version 2.1-3 00:14:58.879 http://cunit.sourceforge.net/ 00:14:58.879 00:14:58.879 00:14:58.879 Suite: bdev_part 00:14:58.879 Test: part_test ...[2024-07-15 09:41:26.856916] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 0fa96b15-868d-6a5a-bcd0-df71d237c382 already exists 00:14:58.879 passed 00:14:58.879 Test: part_free_test ...passed 00:14:58.879 Test: part_get_io_channel_test ...[2024-07-15 09:41:26.857175] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7748:bdev_register: *ERROR*: Unable to add uuid:0fa96b15-868d-6a5a-bcd0-df71d237c382 alias for bdev test1 00:14:58.879 passed 00:14:58.879 Test: part_construct_ext ...passed 00:14:58.879 00:14:58.879 Run Summary: Type Total Ran Passed Failed Inactive 00:14:58.879 suites 1 1 n/a 0 0 00:14:58.879 tests 4 4 4 0 0 00:14:58.879 asserts 48 48 48 0 n/a 00:14:58.879 00:14:58.879 Elapsed time = 0.000 seconds 00:14:58.880 09:41:26 unittest.unittest_bdev -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:14:58.880 00:14:58.880 00:14:58.880 CUnit - A unit testing framework for C - Version 2.1-3 00:14:58.880 http://cunit.sourceforge.net/ 00:14:58.880 00:14:58.880 00:14:58.880 Suite: scsi_nvme_suite 00:14:58.880 Test: scsi_nvme_translate_test ...passed 00:14:58.880 00:14:58.880 Run Summary: Type Total Ran Passed Failed Inactive 00:14:58.880 suites 1 1 n/a 0 0 00:14:58.880 tests 1 1 1 0 0 00:14:58.880 asserts 104 104 104 0 n/a 00:14:58.880 00:14:58.880 Elapsed time = 0.000 seconds 00:14:58.880 09:41:26 unittest.unittest_bdev -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:14:58.880 00:14:58.880 00:14:58.880 CUnit - A unit testing framework for C - Version 2.1-3 00:14:58.880 http://cunit.sourceforge.net/ 00:14:58.880 00:14:58.880 00:14:58.880 Suite: lvol 00:14:58.880 Test: ut_lvs_init ...[2024-07-15 09:41:26.880719] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:14:58.880 passed 00:14:58.880 Test: ut_lvol_init ...[2024-07-15 09:41:26.881112] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:14:58.880 passed 00:14:58.880 Test: ut_lvol_snapshot ...passed 00:14:58.880 Test: ut_lvol_clone ...passed 00:14:58.880 Test: ut_lvs_destroy ...passed 00:14:58.880 Test: ut_lvs_unload ...passed 00:14:58.880 Test: ut_lvol_resize ...passed 00:14:58.880 Test: ut_lvol_set_read_only ...passed 00:14:58.880 Test: ut_lvol_hotremove ...passed 00:14:58.880 Test: ut_vbdev_lvol_get_io_channel ...[2024-07-15 09:41:26.881323] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1394:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:14:58.880 passed 00:14:58.880 Test: ut_vbdev_lvol_io_type_supported ...passed 00:14:58.880 Test: ut_lvol_read_write ...passed 00:14:58.880 Test: ut_vbdev_lvol_submit_request ...passed 00:14:58.880 Test: ut_lvol_examine_config ...passed 00:14:58.880 Test: ut_lvol_examine_disk ...passed 00:14:58.880 Test: ut_lvol_rename 
...[2024-07-15 09:41:26.881504] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1536:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:14:58.880 [2024-07-15 09:41:26.881600] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:14:58.880 [2024-07-15 09:41:26.881627] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1344:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:14:58.880 passed 00:14:58.880 Test: ut_bdev_finish ...passed 00:14:58.880 Test: ut_lvs_rename ...passed 00:14:58.880 Test: ut_lvol_seek ...passed 00:14:58.880 Test: ut_esnap_dev_create ...[2024-07-15 09:41:26.881721] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:14:58.880 [2024-07-15 09:41:26.881748] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1885:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:14:58.880 passed 00:14:58.880 Test: ut_lvol_esnap_clone_bad_args ...[2024-07-15 09:41:26.881774] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1890:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:14:58.880 passed 00:14:58.880 Test: ut_lvol_shallow_copy ...[2024-07-15 09:41:26.881852] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1280:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:14:58.880 [2024-07-15 09:41:26.881883] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1287:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19 00:14:58.880 passed 00:14:58.880 Test: ut_lvol_set_external_parent ...[2024-07-15 09:41:26.881945] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1977:vbdev_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:14:58.880 [2024-07-15 09:41:26.881970] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1982:vbdev_lvol_shallow_copy: *ERROR*: lvol lvol_sc, bdev name must not be NULL 00:14:58.880 [2024-07-15 09:41:26.882011] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:2037:vbdev_lvol_set_external_parent: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19 00:14:58.880 passed 00:14:58.880 00:14:58.880 Run Summary: Type Total Ran Passed Failed Inactive 00:14:58.880 suites 1 1 n/a 0 0 00:14:58.880 tests 23 23 23 0 0 00:14:58.880 asserts 770 770 770 0 n/a 00:14:58.880 00:14:58.880 Elapsed time = 0.008 seconds 00:14:58.880 09:41:26 unittest.unittest_bdev -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:14:58.880 00:14:58.880 00:14:58.880 CUnit - A unit testing framework for C - Version 2.1-3 00:14:58.880 http://cunit.sourceforge.net/ 00:14:58.880 00:14:58.880 00:14:58.880 Suite: zone_block 00:14:58.880 Test: test_zone_block_create ...passed 00:14:58.880 Test: test_zone_block_create_invalid ...[2024-07-15 09:41:26.899580] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:14:58.880 [2024-07-15 09:41:26.899826] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-15 09:41:26.899843] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 
721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:14:58.880 [2024-07-15 09:41:26.899853] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-15 09:41:26.899863] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:14:58.880 [2024-07-15 09:41:26.899872] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argumentpassed 00:14:58.880 Test: test_get_zone_info ...[2024-07-15 09:41:26.899880] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:14:58.880 [2024-07-15 09:41:26.899888] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argumentpassed 00:14:58.880 Test: test_supported_io_types ...passed 00:14:58.880 Test: test_reset_zone ...passed 00:14:58.880 Test: test_open_zone ...[2024-07-15 09:41:26.899953] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:14:58.880 [2024-07-15 09:41:26.899974] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:14:58.880 [2024-07-15 09:41:26.899988] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:14:58.880 [2024-07-15 09:41:26.900041] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:14:58.880 [2024-07-15 09:41:26.900055] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:14:58.880 [2024-07-15 09:41:26.900095] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:14:58.880 passed 00:14:58.880 Test: test_zone_write ...[2024-07-15 09:41:26.900282] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:14:58.880 [2024-07-15 09:41:26.900296] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:14:58.880 [2024-07-15 09:41:26.900334] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:14:58.880 [2024-07-15 09:41:26.900345] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:14:58.880 [2024-07-15 09:41:26.900358] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:14:58.880 [2024-07-15 09:41:26.900369] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
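Note: test_zone_block_create_invalid above hits the vbdev creation guards: the base bdev must not already be claimed or already zoned (the "File exists" failures), and both zone capacity and optimal open zones must be nonzero (the "Invalid argument" failures). A sketch of the parameter half of that validation, with illustrative names:

```c
#include <stdbool.h>
#include <stdint.h>

struct zone_block_opts {
	const char *base_bdev_name; /* must not be claimed or already zoned */
	uint64_t    zone_capacity;  /* blocks per zone */
	uint64_t    optimal_open_zones;
};

static bool
zone_block_opts_valid(const struct zone_block_opts *opts)
{
	if (opts->zone_capacity == 0) {
		return false; /* "Zone capacity can't be 0" */
	}
	if (opts->optimal_open_zones == 0) {
		return false; /* "Optimal open zones can't be 0" */
	}
	/* The claim/already-zoned checks need a live bdev layer and surface as
	 * the "File exists" errors in the log; they are out of scope here. */
	return true;
}
```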
00:14:58.880 [2024-07-15 09:41:26.900863] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 402:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:14:58.880 [2024-07-15 09:41:26.900888] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:14:58.880 [2024-07-15 09:41:26.900899] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 402:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:14:58.880 [2024-07-15 09:41:26.900907] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:14:58.880 [2024-07-15 09:41:26.901387] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 411:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:14:58.880 [2024-07-15 09:41:26.901406] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:14:58.880 passed 00:14:58.880 Test: test_zone_read ...passed 00:14:58.880 Test: test_close_zone ...passed 00:14:58.880 Test: test_finish_zone ...[2024-07-15 09:41:26.901457] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:14:58.880 [2024-07-15 09:41:26.901470] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:14:58.880 [2024-07-15 09:41:26.901484] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:14:58.880 [2024-07-15 09:41:26.901496] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:14:58.880 [2024-07-15 09:41:26.901541] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:14:58.880 [2024-07-15 09:41:26.901552] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:14:58.880 [2024-07-15 09:41:26.901580] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:14:58.880 [2024-07-15 09:41:26.901596] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:14:58.880 [2024-07-15 09:41:26.901630] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:14:58.880 [2024-07-15 09:41:26.901642] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:14:58.880 passed 00:14:58.880 Test: test_append_zone ...[2024-07-15 09:41:26.901694] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
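Note: the test_zone_write and test_zone_read rejections above pin down the zoned I/O rules: a write must target a writable zone, start exactly at the write pointer, and stay inside the zone's capacity; a read must fall inside a valid zone. A sketch of both checks (enum values and field names are illustrative; the lba/wp figures in the comments come from the log):

```c
#include <stdint.h>

enum zone_state { ZONE_EMPTY, ZONE_OPEN, ZONE_FULL }; /* illustrative subset */

struct zone {
	uint64_t start_lba;
	uint64_t capacity;      /* writable blocks in the zone */
	uint64_t write_pointer;
	enum zone_state state;
};

static int
zone_check_write(const struct zone *z, uint64_t lba, uint64_t len)
{
	if (z->state == ZONE_FULL) {
		return -1; /* "Trying to write to zone in invalid state 2" */
	}
	if (lba != z->write_pointer) {
		return -1; /* "invalid address (lba 0x407, wp 0x405)" */
	}
	if (lba + len > z->start_lba + z->capacity) {
		return -1; /* "Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0)" */
	}
	return 0;
}

static int
zone_check_read(const struct zone *z, uint64_t lba, uint64_t len)
{
	if (lba < z->start_lba || lba >= z->start_lba + z->capacity) {
		return -1; /* "Trying to read from invalid zone (lba 0x5000)" */
	}
	if (lba + len > z->start_lba + z->capacity) {
		return -1; /* "Read exceeds zone capacity (lba 0x3f8, len 0x10)" */
	}
	return 0;
}
```

Assuming a zone at LBA 0 with capacity 0x400, as the logged numbers suggest, "lba 0x407, wp 0x405" fails the write-pointer equality check and "lba 0x3f0, len 0x20" fails the capacity check because 0x3f0 + 0x20 > 0x400.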
00:14:58.880 [2024-07-15 09:41:26.901708] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:14:58.880 [2024-07-15 09:41:26.901738] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:14:58.880 [2024-07-15 09:41:26.901748] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:14:58.880 [2024-07-15 09:41:26.901761] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:14:58.881 [2024-07-15 09:41:26.901772] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:14:58.881 [2024-07-15 09:41:26.902721] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 411:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:14:58.881 [2024-07-15 09:41:26.902746] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:14:58.881 passed 00:14:58.881 00:14:58.881 Run Summary: Type Total Ran Passed Failed Inactive 00:14:58.881 suites 1 1 n/a 0 0 00:14:58.881 tests 11 11 11 0 0 00:14:58.881 asserts 3437 3437 3437 0 n/a 00:14:58.881 00:14:58.881 Elapsed time = 0.008 seconds 00:14:58.881 09:41:26 unittest.unittest_bdev -- unit/unittest.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:14:58.881 00:14:58.881 00:14:58.881 CUnit - A unit testing framework for C - Version 2.1-3 00:14:58.881 http://cunit.sourceforge.net/ 00:14:58.881 00:14:58.881 00:14:58.881 Suite: bdev 00:14:58.881 Test: basic ...[2024-07-15 09:41:26.913157] thread.c:2374:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x24b4a9): Operation not permitted (rc=-1) 00:14:58.881 [2024-07-15 09:41:26.913483] thread.c:2374:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x2a1e41e6a480 (0x24b4a0): Operation not permitted (rc=-1) 00:14:58.881 [2024-07-15 09:41:26.913501] thread.c:2374:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x24b4a9): Operation not permitted (rc=-1) 00:14:58.881 passed 00:14:58.881 Test: unregister_and_close ...passed 00:14:58.881 Test: unregister_and_close_different_threads ...passed 00:14:58.881 Test: basic_qos ...passed 00:14:58.881 Test: put_channel_during_reset ...passed 00:14:58.881 Test: aborted_reset ...passed 00:14:58.881 Test: aborted_reset_no_outstanding_io ...passed 00:14:58.881 Test: io_during_reset ...passed 00:14:58.881 Test: reset_completions ...passed 00:14:58.881 Test: io_during_qos_queue ...passed 00:14:58.881 Test: io_during_qos_reset ...passed 00:14:58.881 Test: enomem ...passed 00:14:58.881 Test: enomem_multi_bdev ...passed 00:14:58.881 Test: enomem_multi_bdev_unregister ...passed 00:14:58.881 Test: enomem_multi_io_target ...passed 00:14:58.881 Test: qos_dynamic_enable ...passed 00:14:58.881 Test: bdev_histograms_mt ...passed 00:14:58.881 Test: bdev_set_io_timeout_mt ...[2024-07-15 09:41:26.960179] thread.c: 471:spdk_thread_lib_fini: *ERROR*: io_device 0x2a1e41e6a600 not unregistered 00:14:58.881 passed 00:14:58.881 Test: lock_lba_range_then_submit_io ...[2024-07-15 09:41:26.961753] 
thread.c:2178:spdk_io_device_register: *ERROR*: io_device 0x24b488 already registered (old:0x2a1e41e6a600 new:0x2a1e41e6a780) 00:14:58.881 passed 00:14:58.881 Test: unregister_during_reset ...passed 00:14:58.881 Test: event_notify_and_close ...passed 00:14:59.140 Test: unregister_and_qos_poller ...passed 00:14:59.140 Suite: bdev_wrong_thread 00:14:59.140 Test: spdk_bdev_register_wt ...[2024-07-15 09:41:26.971398] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8529:spdk_bdev_register: *ERROR*: Cannot register bdev wt_bdev on thread 0x2a1e41e33380 (0x2a1e41e33380) 00:14:59.140 passed 00:14:59.140 Test: spdk_bdev_examine_wt ...passed 00:14:59.140 00:14:59.140 [2024-07-15 09:41:26.971508] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 811:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x2a1e41e33380 (0x2a1e41e33380) 00:14:59.140 Run Summary: Type Total Ran Passed Failed Inactive 00:14:59.140 suites 2 2 n/a 0 0 00:14:59.140 tests 24 24 24 0 0 00:14:59.140 asserts 621 621 621 0 n/a 00:14:59.140 00:14:59.140 Elapsed time = 0.055 seconds 00:14:59.140 00:14:59.140 real 0m0.334s 00:14:59.140 user 0m0.181s 00:14:59.140 sys 0m0.125s 00:14:59.140 09:41:26 unittest.unittest_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:59.140 09:41:26 unittest.unittest_bdev -- common/autotest_common.sh@10 -- # set +x 00:14:59.140 ************************************ 00:14:59.140 END TEST unittest_bdev 00:14:59.140 ************************************ 00:14:59.140 09:41:27 unittest -- common/autotest_common.sh@1142 -- # return 0 00:14:59.140 09:41:27 unittest -- unit/unittest.sh@215 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:14:59.140 09:41:27 unittest -- unit/unittest.sh@220 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:14:59.140 09:41:27 unittest -- unit/unittest.sh@225 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:14:59.140 09:41:27 unittest -- unit/unittest.sh@229 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:14:59.140 09:41:27 unittest -- unit/unittest.sh@233 -- # run_test unittest_blob_blobfs unittest_blob 00:14:59.140 09:41:27 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:59.140 09:41:27 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:59.140 09:41:27 unittest -- common/autotest_common.sh@10 -- # set +x 00:14:59.140 ************************************ 00:14:59.140 START TEST unittest_blob_blobfs 00:14:59.140 ************************************ 00:14:59.140 09:41:27 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1123 -- # unittest_blob 00:14:59.140 09:41:27 unittest.unittest_blob_blobfs -- unit/unittest.sh@39 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:14:59.140 09:41:27 unittest.unittest_blob_blobfs -- unit/unittest.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:14:59.140 00:14:59.140 00:14:59.140 CUnit - A unit testing framework for C - Version 2.1-3 00:14:59.140 http://cunit.sourceforge.net/ 00:14:59.140 00:14:59.140 00:14:59.140 Suite: blob_nocopy_noextent 00:14:59.140 Test: blob_init ...[2024-07-15 09:41:27.043165] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5491:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:14:59.140 passed 00:14:59.140 Test: blob_thin_provision ...passed 00:14:59.140 Test: 
blob_read_only ...passed 00:14:59.140 Test: bs_load ...[2024-07-15 09:41:27.157687] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 966:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:14:59.140 passed 00:14:59.140 Test: bs_load_custom_cluster_size ...passed 00:14:59.140 Test: bs_load_after_failed_grow ...passed 00:14:59.140 Test: bs_cluster_sz ...[2024-07-15 09:41:27.196768] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:14:59.140 [2024-07-15 09:41:27.196849] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5623:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:14:59.140 [2024-07-15 09:41:27.196861] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3884:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:14:59.140 passed 00:14:59.400 Test: bs_resize_md ...passed 00:14:59.400 Test: bs_destroy ...passed 00:14:59.400 Test: bs_type ...passed 00:14:59.400 Test: bs_super_block ...passed 00:14:59.400 Test: bs_test_recover_cluster_count ...passed 00:14:59.400 Test: bs_grow_live ...passed 00:14:59.400 Test: bs_grow_live_no_space ...passed 00:14:59.400 Test: bs_test_grow ...passed 00:14:59.400 Test: blob_serialize_test ...passed 00:14:59.400 Test: super_block_crc ...passed 00:14:59.400 Test: blob_thin_prov_write_count_io ...passed 00:14:59.400 Test: blob_thin_prov_unmap_cluster ...passed 00:14:59.400 Test: bs_load_iter_test ...passed 00:14:59.659 Test: blob_relations ...[2024-07-15 09:41:27.497726] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:14:59.659 [2024-07-15 09:41:27.497854] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:14:59.659 [2024-07-15 09:41:27.497985] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:14:59.659 [2024-07-15 09:41:27.497995] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:14:59.659 passed 00:14:59.659 Test: blob_relations2 ...[2024-07-15 09:41:27.520930] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:14:59.659 [2024-07-15 09:41:27.521004] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:14:59.659 [2024-07-15 09:41:27.521015] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:14:59.659 [2024-07-15 09:41:27.521021] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:14:59.659 [2024-07-15 09:41:27.521160] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:14:59.659 [2024-07-15 09:41:27.521169] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:14:59.659 [2024-07-15 09:41:27.521201] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:14:59.659 [2024-07-15 09:41:27.521209] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:14:59.659 passed 00:14:59.659 Test: blob_relations3 ...passed 00:14:59.918 Test: blobstore_clean_power_failure ...passed 00:14:59.918 Test: blob_delete_snapshot_power_failure ...[2024-07-15 09:41:27.806563] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:14:59.918 [2024-07-15 09:41:27.826483] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:14:59.918 [2024-07-15 09:41:27.826570] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:14:59.918 [2024-07-15 09:41:27.826579] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:14:59.918 [2024-07-15 09:41:27.846399] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:14:59.918 [2024-07-15 09:41:27.846479] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:14:59.918 [2024-07-15 09:41:27.846488] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:14:59.918 [2024-07-15 09:41:27.846496] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:14:59.918 [2024-07-15 09:41:27.866199] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:14:59.918 [2024-07-15 09:41:27.866273] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:14:59.918 [2024-07-15 09:41:27.885600] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:14:59.918 [2024-07-15 09:41:27.885674] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:14:59.918 [2024-07-15 09:41:27.904732] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:14:59.918 [2024-07-15 09:41:27.904802] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:14:59.918 passed 00:14:59.918 Test: blob_create_snapshot_power_failure ...[2024-07-15 09:41:27.963505] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:14:59.918 [2024-07-15 09:41:28.002296] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:15:00.178 [2024-07-15 09:41:28.021854] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:15:00.178 passed 00:15:00.178 Test: blob_io_unit ...passed 00:15:00.178 Test: blob_io_unit_compatibility ...passed 00:15:00.178 Test: blob_ext_md_pages ...passed 00:15:00.178 Test: blob_esnap_io_4096_4096 ...passed 00:15:00.178 Test: blob_esnap_io_512_512 ...passed 00:15:00.178 Test: blob_esnap_io_4096_512 ...passed 00:15:00.437 Test: blob_esnap_io_512_4096 ...passed 00:15:00.437 Test: blob_esnap_clone_resize ...passed 00:15:00.437 
Suite: blob_bs_nocopy_noextent 00:15:00.437 Test: blob_open ...passed 00:15:00.437 Test: blob_create ...[2024-07-15 09:41:28.434373] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:15:00.437 passed 00:15:00.437 Test: blob_create_loop ...passed 00:15:00.696 Test: blob_create_fail ...[2024-07-15 09:41:28.566572] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:15:00.696 passed 00:15:00.696 Test: blob_create_internal ...passed 00:15:00.696 Test: blob_create_zero_extent ...passed 00:15:00.696 Test: blob_snapshot ...passed 00:15:00.955 Test: blob_clone ...passed 00:15:00.955 Test: blob_inflate ...[2024-07-15 09:41:28.879444] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:15:00.955 passed 00:15:00.955 Test: blob_delete ...passed 00:15:00.955 Test: blob_resize_test ...[2024-07-15 09:41:28.998462] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:15:00.955 passed 00:15:01.214 Test: blob_resize_thin_test ...passed 00:15:01.214 Test: channel_ops ...passed 00:15:01.214 Test: blob_super ...passed 00:15:01.214 Test: blob_rw_verify_iov ...passed 00:15:01.473 Test: blob_unmap ...passed 00:15:01.473 Test: blob_iter ...passed 00:15:01.473 Test: blob_parse_md ...passed 00:15:01.473 Test: bs_load_pending_removal ...passed 00:15:01.473 Test: bs_unload ...[2024-07-15 09:41:29.530182] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:15:01.473 passed 00:15:01.733 Test: bs_usable_clusters ...passed 00:15:01.733 Test: blob_crc ...[2024-07-15 09:41:29.646399] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:15:01.733 [2024-07-15 09:41:29.646482] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:15:01.733 passed 00:15:01.733 Test: blob_flags ...passed 00:15:01.733 Test: bs_version ...passed 00:15:01.733 Test: blob_set_xattrs_test ...[2024-07-15 09:41:29.824826] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:15:01.733 [2024-07-15 09:41:29.824915] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:15:01.992 passed 00:15:01.993 Test: blob_thin_prov_alloc ...passed 00:15:01.993 Test: blob_insert_cluster_msg_test ...passed 00:15:01.993 Test: blob_thin_prov_rw ...passed 00:15:02.252 Test: blob_thin_prov_rle ...passed 00:15:02.252 Test: blob_thin_prov_rw_iov ...passed 00:15:02.252 Test: blob_snapshot_rw ...passed 00:15:02.252 Test: blob_snapshot_rw_iov ...passed 00:15:02.512 Test: blob_inflate_rw ...passed 00:15:02.512 Test: blob_snapshot_freeze_io ...passed 00:15:02.512 Test: blob_operation_split_rw ...passed 00:15:02.771 Test: blob_operation_split_rw_iov ...passed 00:15:02.771 Test: blob_simultaneous_operations ...[2024-07-15 09:41:30.693625] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:15:02.771 
[2024-07-15 09:41:30.693720] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:02.771 [2024-07-15 09:41:30.694310] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:15:02.771 [2024-07-15 09:41:30.694327] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:02.771 [2024-07-15 09:41:30.699590] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:15:02.771 [2024-07-15 09:41:30.699642] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:02.771 [2024-07-15 09:41:30.699666] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:15:02.771 [2024-07-15 09:41:30.699673] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:02.771 passed 00:15:02.771 Test: blob_persist_test ...passed 00:15:03.030 Test: blob_decouple_snapshot ...passed 00:15:03.030 Test: blob_seek_io_unit ...passed 00:15:03.030 Test: blob_nested_freezes ...passed 00:15:03.030 Test: blob_clone_resize ...passed 00:15:03.030 Test: blob_shallow_copy ...[2024-07-15 09:41:31.099628] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:15:03.030 [2024-07-15 09:41:31.099727] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7343:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:15:03.030 [2024-07-15 09:41:31.099737] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:15:03.030 passed 00:15:03.030 Suite: blob_blob_nocopy_noextent 00:15:03.289 Test: blob_write ...passed 00:15:03.289 Test: blob_read ...passed 00:15:03.289 Test: blob_rw_verify ...passed 00:15:03.289 Test: blob_rw_verify_iov_nomem ...passed 00:15:03.548 Test: blob_rw_iov_read_only ...passed 00:15:03.548 Test: blob_xattr ...passed 00:15:03.548 Test: blob_dirty_shutdown ...passed 00:15:03.548 Test: blob_is_degraded ...passed 00:15:03.548 Suite: blob_esnap_bs_nocopy_noextent 00:15:03.808 Test: blob_esnap_create ...passed 00:15:03.808 Test: blob_esnap_thread_add_remove ...passed 00:15:03.808 Test: blob_esnap_clone_snapshot ...passed 00:15:03.808 Test: blob_esnap_clone_inflate ...passed 00:15:03.808 Test: blob_esnap_clone_decouple ...passed 00:15:04.066 Test: blob_esnap_clone_reload ...passed 00:15:04.066 Test: blob_esnap_hotplug ...passed 00:15:04.066 Test: blob_set_parent ...[2024-07-15 09:41:32.026714] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:15:04.066 [2024-07-15 09:41:32.026802] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:15:04.066 [2024-07-15 09:41:32.026822] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:15:04.066 [2024-07-15 09:41:32.026830] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob 
has a number of clusters different from child's ones 00:15:04.066 [2024-07-15 09:41:32.026878] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:15:04.066 passed 00:15:04.066 Test: blob_set_external_parent ...[2024-07-15 09:41:32.082441] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:15:04.066 [2024-07-15 09:41:32.082508] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:15:04.066 [2024-07-15 09:41:32.082515] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:15:04.066 [2024-07-15 09:41:32.082551] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:15:04.066 passed 00:15:04.066 Suite: blob_nocopy_extent 00:15:04.066 Test: blob_init ...[2024-07-15 09:41:32.101100] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5491:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:15:04.066 passed 00:15:04.066 Test: blob_thin_provision ...passed 00:15:04.066 Test: blob_read_only ...passed 00:15:04.326 Test: bs_load ...[2024-07-15 09:41:32.173995] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 966:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:15:04.326 passed 00:15:04.326 Test: bs_load_custom_cluster_size ...passed 00:15:04.326 Test: bs_load_after_failed_grow ...passed 00:15:04.326 Test: bs_cluster_sz ...[2024-07-15 09:41:32.211091] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:15:04.326 [2024-07-15 09:41:32.211154] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5623:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
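Note: bs_cluster_sz (both in this suite and in the blob_nocopy_noextent run at 09:41:27 above) exercises two init-time guards: blobstore options must be nonzero, and the cluster size must be at least the page size, 4096 bytes here. A sketch of that verification with an illustrative subset of the options struct:

```c
#include <stdbool.h>
#include <stdint.h>

#define BLOB_PAGE_SIZE 4096u /* implied by "Cluster size 4095 is smaller than page size 4096" */

struct bs_opts {
	uint32_t cluster_sz;   /* illustrative subset of SPDK's spdk_bs_opts */
	uint32_t num_md_pages;
	uint32_t max_md_ops;
};

static bool
bs_opts_verify(const struct bs_opts *opts)
{
	if (opts->cluster_sz == 0 || opts->num_md_pages == 0 ||
	    opts->max_md_ops == 0) {
		return false; /* "Blobstore options cannot be set to 0" */
	}
	if (opts->cluster_sz < BLOB_PAGE_SIZE) {
		return false; /* "Cluster size 4095 is smaller than page size 4096" */
	}
	return true;
}
```

The companion "metadata cannot use more clusters than is available" error is the follow-on check: the pages reserved for metadata must still fit the device at the chosen cluster size.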
00:15:04.326 [2024-07-15 09:41:32.211164] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3884:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:15:04.326 passed 00:15:04.326 Test: bs_resize_md ...passed 00:15:04.326 Test: bs_destroy ...passed 00:15:04.326 Test: bs_type ...passed 00:15:04.326 Test: bs_super_block ...passed 00:15:04.326 Test: bs_test_recover_cluster_count ...passed 00:15:04.326 Test: bs_grow_live ...passed 00:15:04.326 Test: bs_grow_live_no_space ...passed 00:15:04.326 Test: bs_test_grow ...passed 00:15:04.326 Test: blob_serialize_test ...passed 00:15:04.326 Test: super_block_crc ...passed 00:15:04.326 Test: blob_thin_prov_write_count_io ...passed 00:15:04.585 Test: blob_thin_prov_unmap_cluster ...passed 00:15:04.585 Test: bs_load_iter_test ...passed 00:15:04.585 Test: blob_relations ...[2024-07-15 09:41:32.491899] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:15:04.585 [2024-07-15 09:41:32.491986] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:04.585 [2024-07-15 09:41:32.492069] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:15:04.585 [2024-07-15 09:41:32.492075] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:04.585 passed 00:15:04.585 Test: blob_relations2 ...[2024-07-15 09:41:32.512959] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:15:04.585 [2024-07-15 09:41:32.513002] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:04.585 [2024-07-15 09:41:32.513010] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:15:04.585 [2024-07-15 09:41:32.513015] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:04.585 [2024-07-15 09:41:32.513116] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:15:04.585 [2024-07-15 09:41:32.513124] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:04.586 [2024-07-15 09:41:32.513152] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:15:04.586 [2024-07-15 09:41:32.513158] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:04.586 passed 00:15:04.586 Test: blob_relations3 ...passed 00:15:04.844 Test: blobstore_clean_power_failure ...passed 00:15:04.844 Test: blob_delete_snapshot_power_failure ...[2024-07-15 09:41:32.782420] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:15:04.844 [2024-07-15 09:41:32.801167] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:15:04.844 [2024-07-15 09:41:32.820133] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:15:04.844 [2024-07-15 09:41:32.820196] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:15:04.844 [2024-07-15 09:41:32.820203] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:04.844 [2024-07-15 09:41:32.838708] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:15:04.844 [2024-07-15 09:41:32.838755] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:15:04.844 [2024-07-15 09:41:32.838763] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:15:04.844 [2024-07-15 09:41:32.838770] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:04.844 [2024-07-15 09:41:32.857361] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:15:04.844 [2024-07-15 09:41:32.857401] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:15:04.844 [2024-07-15 09:41:32.857408] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:15:04.844 [2024-07-15 09:41:32.857414] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:04.844 [2024-07-15 09:41:32.876260] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:15:04.844 [2024-07-15 09:41:32.876315] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:04.844 [2024-07-15 09:41:32.895188] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:15:04.844 [2024-07-15 09:41:32.895250] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:04.844 [2024-07-15 09:41:32.913944] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:15:04.844 [2024-07-15 09:41:32.914002] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:05.104 passed 00:15:05.104 Test: blob_create_snapshot_power_failure ...[2024-07-15 09:41:32.970486] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:15:05.104 [2024-07-15 09:41:32.989298] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:15:05.104 [2024-07-15 09:41:33.026223] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:15:05.104 [2024-07-15 09:41:33.045664] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:15:05.104 passed 00:15:05.104 Test: blob_io_unit ...passed 00:15:05.104 Test: blob_io_unit_compatibility ...passed 00:15:05.104 Test: blob_ext_md_pages ...passed 00:15:05.104 Test: blob_esnap_io_4096_4096 ...passed 00:15:05.362 Test: blob_esnap_io_512_512 ...passed 00:15:05.362 Test: blob_esnap_io_4096_512 ...passed 00:15:05.362 Test: 
blob_esnap_io_512_4096 ...passed 00:15:05.362 Test: blob_esnap_clone_resize ...passed 00:15:05.362 Suite: blob_bs_nocopy_extent 00:15:05.362 Test: blob_open ...passed 00:15:05.362 Test: blob_create ...[2024-07-15 09:41:33.441815] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:15:05.620 passed 00:15:05.620 Test: blob_create_loop ...passed 00:15:05.620 Test: blob_create_fail ...[2024-07-15 09:41:33.565767] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:15:05.620 passed 00:15:05.620 Test: blob_create_internal ...passed 00:15:05.620 Test: blob_create_zero_extent ...passed 00:15:05.897 Test: blob_snapshot ...passed 00:15:05.897 Test: blob_clone ...passed 00:15:05.897 Test: blob_inflate ...[2024-07-15 09:41:33.866572] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:15:05.897 passed 00:15:05.897 Test: blob_delete ...passed 00:15:05.897 Test: blob_resize_test ...[2024-07-15 09:41:33.982051] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:15:06.235 passed 00:15:06.235 Test: blob_resize_thin_test ...passed 00:15:06.235 Test: channel_ops ...passed 00:15:06.235 Test: blob_super ...passed 00:15:06.235 Test: blob_rw_verify_iov ...passed 00:15:06.235 Test: blob_unmap ...passed 00:15:06.543 Test: blob_iter ...passed 00:15:06.543 Test: blob_parse_md ...passed 00:15:06.543 Test: bs_load_pending_removal ...passed 00:15:06.543 Test: bs_unload ...[2024-07-15 09:41:34.499787] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:15:06.543 passed 00:15:06.543 Test: bs_usable_clusters ...passed 00:15:06.543 Test: blob_crc ...[2024-07-15 09:41:34.610812] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:15:06.543 [2024-07-15 09:41:34.610902] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:15:06.543 passed 00:15:06.820 Test: blob_flags ...passed 00:15:06.820 Test: bs_version ...passed 00:15:06.820 Test: blob_set_xattrs_test ...[2024-07-15 09:41:34.782225] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:15:06.820 [2024-07-15 09:41:34.782330] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:15:06.820 passed 00:15:06.820 Test: blob_thin_prov_alloc ...passed 00:15:07.099 Test: blob_insert_cluster_msg_test ...passed 00:15:07.099 Test: blob_thin_prov_rw ...passed 00:15:07.099 Test: blob_thin_prov_rle ...passed 00:15:07.099 Test: blob_thin_prov_rw_iov ...passed 00:15:07.099 Test: blob_snapshot_rw ...passed 00:15:07.358 Test: blob_snapshot_rw_iov ...passed 00:15:07.358 Test: blob_inflate_rw ...passed 00:15:07.358 Test: blob_snapshot_freeze_io ...passed 00:15:07.617 Test: blob_operation_split_rw ...passed 00:15:07.617 Test: blob_operation_split_rw_iov ...passed 00:15:07.617 Test: blob_simultaneous_operations ...[2024-07-15 09:41:35.622613] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:15:07.617 [2024-07-15 09:41:35.622709] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:07.617 [2024-07-15 09:41:35.623253] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:15:07.617 [2024-07-15 09:41:35.623280] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:07.617 [2024-07-15 09:41:35.628459] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:15:07.617 [2024-07-15 09:41:35.628507] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:07.617 [2024-07-15 09:41:35.628530] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:15:07.617 [2024-07-15 09:41:35.628538] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:07.617 passed 00:15:07.876 Test: blob_persist_test ...passed 00:15:07.876 Test: blob_decouple_snapshot ...passed 00:15:07.876 Test: blob_seek_io_unit ...passed 00:15:07.876 Test: blob_nested_freezes ...passed 00:15:08.133 Test: blob_clone_resize ...passed 00:15:08.133 Test: blob_shallow_copy ...[2024-07-15 09:41:36.032539] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:15:08.133 [2024-07-15 09:41:36.032629] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7343:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:15:08.133 [2024-07-15 09:41:36.032640] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:15:08.133 passed 00:15:08.133 Suite: blob_blob_nocopy_extent 00:15:08.133 Test: blob_write ...passed 00:15:08.133 Test: blob_read ...passed 00:15:08.392 Test: blob_rw_verify ...passed 00:15:08.392 Test: blob_rw_verify_iov_nomem ...passed 00:15:08.392 Test: blob_rw_iov_read_only ...passed 00:15:08.392 Test: blob_xattr ...passed 00:15:08.392 Test: blob_dirty_shutdown ...passed 00:15:08.651 Test: blob_is_degraded ...passed 00:15:08.651 Suite: blob_esnap_bs_nocopy_extent 00:15:08.651 Test: blob_esnap_create ...passed 00:15:08.651 Test: blob_esnap_thread_add_remove ...passed 00:15:08.651 Test: blob_esnap_clone_snapshot ...passed 00:15:08.909 Test: blob_esnap_clone_inflate ...passed 00:15:08.909 Test: blob_esnap_clone_decouple ...passed 00:15:08.909 Test: blob_esnap_clone_reload ...passed 00:15:08.909 Test: blob_esnap_hotplug ...passed 00:15:08.909 Test: blob_set_parent ...[2024-07-15 09:41:36.985054] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:15:08.909 [2024-07-15 09:41:36.985167] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:15:08.909 [2024-07-15 09:41:36.985189] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:15:08.909 
[2024-07-15 09:41:36.985197] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:15:08.909 [2024-07-15 09:41:36.985247] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:15:09.167 passed 00:15:09.167 Test: blob_set_external_parent ...[2024-07-15 09:41:37.043826] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:15:09.167 [2024-07-15 09:41:37.043896] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:15:09.167 [2024-07-15 09:41:37.043904] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:15:09.167 [2024-07-15 09:41:37.043945] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:15:09.167 passed 00:15:09.167 Suite: blob_copy_noextent 00:15:09.167 Test: blob_init ...[2024-07-15 09:41:37.063167] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5491:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:15:09.168 passed 00:15:09.168 Test: blob_thin_provision ...passed 00:15:09.168 Test: blob_read_only ...passed 00:15:09.168 Test: bs_load ...[2024-07-15 09:41:37.140466] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 966:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:15:09.168 passed 00:15:09.168 Test: bs_load_custom_cluster_size ...passed 00:15:09.168 Test: bs_load_after_failed_grow ...passed 00:15:09.168 Test: bs_cluster_sz ...[2024-07-15 09:41:37.179137] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:15:09.168 [2024-07-15 09:41:37.179210] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5623:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
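(Annotation: the bs_cluster_sz failures logged here are the negative-path checks in bs_opts_verify() and bs_alloc() — a zero option or a cluster size smaller than the 4096-byte metadata page is rejected at init time. A minimal sketch of the valid-path setup those checks guard, assuming a caller-supplied bs_dev and the two-argument spdk_bs_opts_init() of recent SPDK releases; older releases take only the opts pointer. This is an illustrative sketch, not the unit test's code.

    #include "spdk/blob.h"

    static void
    init_done(void *cb_arg, struct spdk_blob_store *bs, int bserrno)
    {
            /* bserrno is negative for the bad sizes exercised above */
    }

    static void
    init_blobstore(struct spdk_bs_dev *bs_dev)
    {
            struct spdk_bs_opts opts;

            spdk_bs_opts_init(&opts, sizeof(opts));
            /* cluster_sz must be non-zero and no smaller than the 4096-byte
             * page; 4095 is the value the test uses to trip bs_alloc() */
            opts.cluster_sz = 1024 * 1024;
            spdk_bs_init(bs_dev, &opts, init_done, NULL);
    }
)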
00:15:09.168 [2024-07-15 09:41:37.179223] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3884:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:15:09.168 passed 00:15:09.168 Test: bs_resize_md ...passed 00:15:09.168 Test: bs_destroy ...passed 00:15:09.426 Test: bs_type ...passed 00:15:09.426 Test: bs_super_block ...passed 00:15:09.426 Test: bs_test_recover_cluster_count ...passed 00:15:09.426 Test: bs_grow_live ...passed 00:15:09.426 Test: bs_grow_live_no_space ...passed 00:15:09.426 Test: bs_test_grow ...passed 00:15:09.426 Test: blob_serialize_test ...passed 00:15:09.426 Test: super_block_crc ...passed 00:15:09.426 Test: blob_thin_prov_write_count_io ...passed 00:15:09.426 Test: blob_thin_prov_unmap_cluster ...passed 00:15:09.426 Test: bs_load_iter_test ...passed 00:15:09.426 Test: blob_relations ...[2024-07-15 09:41:37.473902] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:15:09.426 [2024-07-15 09:41:37.473995] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:09.426 [2024-07-15 09:41:37.474070] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:15:09.426 [2024-07-15 09:41:37.474077] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:09.426 passed 00:15:09.426 Test: blob_relations2 ...[2024-07-15 09:41:37.496211] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:15:09.426 [2024-07-15 09:41:37.496279] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:09.426 [2024-07-15 09:41:37.496288] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:15:09.426 [2024-07-15 09:41:37.496294] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:09.426 [2024-07-15 09:41:37.496396] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:15:09.426 [2024-07-15 09:41:37.496403] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:09.426 [2024-07-15 09:41:37.496429] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:15:09.426 [2024-07-15 09:41:37.496435] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:09.426 passed 00:15:09.685 Test: blob_relations3 ...passed 00:15:09.685 Test: blobstore_clean_power_failure ...passed 00:15:09.685 Test: blob_delete_snapshot_power_failure ...[2024-07-15 09:41:37.775868] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:15:09.943 [2024-07-15 09:41:37.795982] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:15:09.943 [2024-07-15 09:41:37.796065] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:15:09.943 [2024-07-15 09:41:37.796075] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:09.943 [2024-07-15 09:41:37.816613] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:15:09.943 [2024-07-15 09:41:37.816684] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:15:09.943 [2024-07-15 09:41:37.816693] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:15:09.943 [2024-07-15 09:41:37.816700] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:09.943 [2024-07-15 09:41:37.836695] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:15:09.943 [2024-07-15 09:41:37.836763] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:09.943 [2024-07-15 09:41:37.856892] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:15:09.943 [2024-07-15 09:41:37.856964] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:09.943 [2024-07-15 09:41:37.876425] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:15:09.943 [2024-07-15 09:41:37.876482] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:09.943 passed 00:15:09.943 Test: blob_create_snapshot_power_failure ...[2024-07-15 09:41:37.937191] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:15:09.943 [2024-07-15 09:41:37.979171] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:15:09.943 [2024-07-15 09:41:37.999180] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:15:10.200 passed 00:15:10.200 Test: blob_io_unit ...passed 00:15:10.200 Test: blob_io_unit_compatibility ...passed 00:15:10.200 Test: blob_ext_md_pages ...passed 00:15:10.200 Test: blob_esnap_io_4096_4096 ...passed 00:15:10.200 Test: blob_esnap_io_512_512 ...passed 00:15:10.200 Test: blob_esnap_io_4096_512 ...passed 00:15:10.200 Test: blob_esnap_io_512_4096 ...passed 00:15:10.456 Test: blob_esnap_clone_resize ...passed 00:15:10.456 Suite: blob_bs_copy_noextent 00:15:10.456 Test: blob_open ...passed 00:15:10.456 Test: blob_create ...[2024-07-15 09:41:38.414516] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:15:10.456 passed 00:15:10.456 Test: blob_create_loop ...passed 00:15:10.456 Test: blob_create_fail ...[2024-07-15 09:41:38.540728] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:15:10.714 passed 00:15:10.714 Test: blob_create_internal ...passed 00:15:10.714 Test: blob_create_zero_extent ...passed 00:15:10.714 Test: blob_snapshot ...passed 00:15:10.714 Test: blob_clone ...passed 00:15:10.971 Test: blob_inflate 
...[2024-07-15 09:41:38.826936] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:15:10.971 passed 00:15:10.971 Test: blob_delete ...passed 00:15:10.971 Test: blob_resize_test ...[2024-07-15 09:41:38.938180] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:15:10.971 passed 00:15:10.971 Test: blob_resize_thin_test ...passed 00:15:11.229 Test: channel_ops ...passed 00:15:11.229 Test: blob_super ...passed 00:15:11.229 Test: blob_rw_verify_iov ...passed 00:15:11.229 Test: blob_unmap ...passed 00:15:11.229 Test: blob_iter ...passed 00:15:11.487 Test: blob_parse_md ...passed 00:15:11.487 Test: bs_load_pending_removal ...passed 00:15:11.487 Test: bs_unload ...[2024-07-15 09:41:39.436598] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:15:11.487 passed 00:15:11.487 Test: bs_usable_clusters ...passed 00:15:11.487 Test: blob_crc ...[2024-07-15 09:41:39.546646] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:15:11.487 [2024-07-15 09:41:39.546710] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:15:11.487 passed 00:15:11.744 Test: blob_flags ...passed 00:15:11.744 Test: bs_version ...passed 00:15:11.744 Test: blob_set_xattrs_test ...[2024-07-15 09:41:39.709865] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:15:11.744 [2024-07-15 09:41:39.709941] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:15:11.744 passed 00:15:11.744 Test: blob_thin_prov_alloc ...passed 00:15:12.002 Test: blob_insert_cluster_msg_test ...passed 00:15:12.002 Test: blob_thin_prov_rw ...passed 00:15:12.002 Test: blob_thin_prov_rle ...passed 00:15:12.002 Test: blob_thin_prov_rw_iov ...passed 00:15:12.002 Test: blob_snapshot_rw ...passed 00:15:12.261 Test: blob_snapshot_rw_iov ...passed 00:15:12.261 Test: blob_inflate_rw ...passed 00:15:12.261 Test: blob_snapshot_freeze_io ...passed 00:15:12.519 Test: blob_operation_split_rw ...passed 00:15:12.519 Test: blob_operation_split_rw_iov ...passed 00:15:12.519 Test: blob_simultaneous_operations ...[2024-07-15 09:41:40.478565] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:15:12.519 [2024-07-15 09:41:40.478654] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:12.519 [2024-07-15 09:41:40.479200] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:15:12.519 [2024-07-15 09:41:40.479217] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:12.519 [2024-07-15 09:41:40.483115] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:15:12.519 [2024-07-15 09:41:40.483145] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:12.519 [2024-07-15 09:41:40.483166] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:15:12.519 [2024-07-15 09:41:40.483172] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:12.519 passed 00:15:12.519 Test: blob_persist_test ...passed 00:15:12.777 Test: blob_decouple_snapshot ...passed 00:15:12.777 Test: blob_seek_io_unit ...passed 00:15:12.777 Test: blob_nested_freezes ...passed 00:15:12.777 Test: blob_clone_resize ...passed 00:15:12.777 Test: blob_shallow_copy ...[2024-07-15 09:41:40.836668] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:15:12.777 [2024-07-15 09:41:40.836753] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7343:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:15:12.777 [2024-07-15 09:41:40.836763] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:15:12.777 passed 00:15:12.777 Suite: blob_blob_copy_noextent 00:15:13.037 Test: blob_write ...passed 00:15:13.037 Test: blob_read ...passed 00:15:13.037 Test: blob_rw_verify ...passed 00:15:13.037 Test: blob_rw_verify_iov_nomem ...passed 00:15:13.037 Test: blob_rw_iov_read_only ...passed 00:15:13.295 Test: blob_xattr ...passed 00:15:13.295 Test: blob_dirty_shutdown ...passed 00:15:13.295 Test: blob_is_degraded ...passed 00:15:13.295 Suite: blob_esnap_bs_copy_noextent 00:15:13.295 Test: blob_esnap_create ...passed 00:15:13.554 Test: blob_esnap_thread_add_remove ...passed 00:15:13.554 Test: blob_esnap_clone_snapshot ...passed 00:15:13.554 Test: blob_esnap_clone_inflate ...passed 00:15:13.554 Test: blob_esnap_clone_decouple ...passed 00:15:13.554 Test: blob_esnap_clone_reload ...passed 00:15:13.813 Test: blob_esnap_hotplug ...passed 00:15:13.813 Test: blob_set_parent ...[2024-07-15 09:41:41.693877] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:15:13.813 [2024-07-15 09:41:41.693956] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:15:13.813 [2024-07-15 09:41:41.693977] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:15:13.813 [2024-07-15 09:41:41.693986] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:15:13.813 [2024-07-15 09:41:41.694034] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:15:13.813 passed 00:15:13.813 Test: blob_set_external_parent ...[2024-07-15 09:41:41.747501] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:15:13.813 [2024-07-15 09:41:41.747562] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:15:13.813 [2024-07-15 09:41:41.747569] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: 
external snapshot is already the parent of blob 00:15:13.813 [2024-07-15 09:41:41.747613] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:15:13.813 passed 00:15:13.813 Suite: blob_copy_extent 00:15:13.813 Test: blob_init ...[2024-07-15 09:41:41.765244] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5491:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:15:13.813 passed 00:15:13.813 Test: blob_thin_provision ...passed 00:15:13.813 Test: blob_read_only ...passed 00:15:13.813 Test: bs_load ...[2024-07-15 09:41:41.836459] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 966:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:15:13.813 passed 00:15:13.813 Test: bs_load_custom_cluster_size ...passed 00:15:13.813 Test: bs_load_after_failed_grow ...passed 00:15:13.813 Test: bs_cluster_sz ...[2024-07-15 09:41:41.872575] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:15:13.813 [2024-07-15 09:41:41.872645] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5623:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:15:13.813 [2024-07-15 09:41:41.872655] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3884:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:15:13.813 passed 00:15:14.073 Test: bs_resize_md ...passed 00:15:14.073 Test: bs_destroy ...passed 00:15:14.073 Test: bs_type ...passed 00:15:14.073 Test: bs_super_block ...passed 00:15:14.073 Test: bs_test_recover_cluster_count ...passed 00:15:14.073 Test: bs_grow_live ...passed 00:15:14.073 Test: bs_grow_live_no_space ...passed 00:15:14.073 Test: bs_test_grow ...passed 00:15:14.073 Test: blob_serialize_test ...passed 00:15:14.073 Test: super_block_crc ...passed 00:15:14.073 Test: blob_thin_prov_write_count_io ...passed 00:15:14.073 Test: blob_thin_prov_unmap_cluster ...passed 00:15:14.073 Test: bs_load_iter_test ...passed 00:15:14.073 Test: blob_relations ...[2024-07-15 09:41:42.151819] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:15:14.073 [2024-07-15 09:41:42.151904] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:14.073 [2024-07-15 09:41:42.152035] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:15:14.073 [2024-07-15 09:41:42.152044] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:14.073 passed 00:15:14.332 Test: blob_relations2 ...[2024-07-15 09:41:42.173125] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:15:14.332 [2024-07-15 09:41:42.173184] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:14.332 [2024-07-15 09:41:42.173192] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:15:14.332 [2024-07-15 09:41:42.173198] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:14.332 [2024-07-15 
09:41:42.173310] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:15:14.332 [2024-07-15 09:41:42.173318] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:14.332 [2024-07-15 09:41:42.173346] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:15:14.332 [2024-07-15 09:41:42.173353] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:14.332 passed 00:15:14.332 Test: blob_relations3 ...passed 00:15:14.332 Test: blobstore_clean_power_failure ...passed 00:15:14.591 Test: blob_delete_snapshot_power_failure ...[2024-07-15 09:41:42.430737] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:15:14.591 [2024-07-15 09:41:42.448592] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:15:14.591 [2024-07-15 09:41:42.466402] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:15:14.591 [2024-07-15 09:41:42.466462] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:15:14.591 [2024-07-15 09:41:42.466471] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:14.591 [2024-07-15 09:41:42.484425] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:15:14.591 [2024-07-15 09:41:42.484476] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:15:14.591 [2024-07-15 09:41:42.484484] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:15:14.591 [2024-07-15 09:41:42.484490] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:14.591 [2024-07-15 09:41:42.502320] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:15:14.591 [2024-07-15 09:41:42.502377] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:15:14.591 [2024-07-15 09:41:42.502386] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:15:14.591 [2024-07-15 09:41:42.502393] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:14.591 [2024-07-15 09:41:42.520179] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:15:14.591 [2024-07-15 09:41:42.520227] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:14.591 [2024-07-15 09:41:42.537952] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:15:14.591 [2024-07-15 09:41:42.538014] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:14.591 [2024-07-15 09:41:42.556328] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:15:14.591 [2024-07-15 09:41:42.556394] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:14.591 passed 00:15:14.591 Test: blob_create_snapshot_power_failure ...[2024-07-15 09:41:42.610260] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:15:14.591 [2024-07-15 09:41:42.628407] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:15:14.591 [2024-07-15 09:41:42.665099] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:15:14.591 [2024-07-15 09:41:42.683367] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:15:14.851 passed 00:15:14.851 Test: blob_io_unit ...passed 00:15:14.851 Test: blob_io_unit_compatibility ...passed 00:15:14.851 Test: blob_ext_md_pages ...passed 00:15:14.851 Test: blob_esnap_io_4096_4096 ...passed 00:15:14.851 Test: blob_esnap_io_512_512 ...passed 00:15:14.851 Test: blob_esnap_io_4096_512 ...passed 00:15:14.851 Test: blob_esnap_io_512_4096 ...passed 00:15:15.111 Test: blob_esnap_clone_resize ...passed 00:15:15.111 Suite: blob_bs_copy_extent 00:15:15.111 Test: blob_open ...passed 00:15:15.111 Test: blob_create ...[2024-07-15 09:41:43.066161] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:15:15.111 passed 00:15:15.111 Test: blob_create_loop ...passed 00:15:15.111 Test: blob_create_fail ...[2024-07-15 09:41:43.187680] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:15:15.370 passed 00:15:15.370 Test: blob_create_internal ...passed 00:15:15.370 Test: blob_create_zero_extent ...passed 00:15:15.370 Test: blob_snapshot ...passed 00:15:15.370 Test: blob_clone ...passed 00:15:15.629 Test: blob_inflate ...[2024-07-15 09:41:43.477575] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 
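(Annotation: the blob_inflate error just logged, "Cannot decouple parent of blob with no parent", is the guard in bs_inflate_blob_open_cpl(): flattening only makes sense for a clone, so a parentless blob fails at open completion. A hedged sketch of the two public entry points that funnel into that callback, assuming a clone's blob id; both share the spdk_blob_op_complete signature. Illustrative only, not the test's code.

    #include "spdk/blob.h"

    static void
    flatten_done(void *cb_arg, int bserrno)
    {
            /* a blob with no parent completes with the error logged above */
    }

    static void
    flatten_clone(struct spdk_blob_store *bs, struct spdk_io_channel *ch,
                  spdk_blob_id clone_id)
    {
            /* inflate: allocate every cluster and drop the parent entirely */
            spdk_bs_inflate_blob(bs, ch, clone_id, flatten_done, NULL);
            /* decouple instead copies only the allocated clusters, keeping
             * thin provisioning:
             * spdk_bs_blob_decouple_parent(bs, ch, clone_id, flatten_done, NULL);
             */
    }
)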
00:15:15.629 passed 00:15:15.629 Test: blob_delete ...passed 00:15:15.629 Test: blob_resize_test ...[2024-07-15 09:41:43.590439] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:15:15.629 passed 00:15:15.629 Test: blob_resize_thin_test ...passed 00:15:15.888 Test: channel_ops ...passed 00:15:15.888 Test: blob_super ...passed 00:15:15.888 Test: blob_rw_verify_iov ...passed 00:15:15.888 Test: blob_unmap ...passed 00:15:15.888 Test: blob_iter ...passed 00:15:16.147 Test: blob_parse_md ...passed 00:15:16.147 Test: bs_load_pending_removal ...passed 00:15:16.147 Test: bs_unload ...[2024-07-15 09:41:44.099873] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:15:16.147 passed 00:15:16.147 Test: bs_usable_clusters ...passed 00:15:16.147 Test: blob_crc ...[2024-07-15 09:41:44.212853] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:15:16.147 [2024-07-15 09:41:44.212937] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:15:16.147 passed 00:15:16.405 Test: blob_flags ...passed 00:15:16.405 Test: bs_version ...passed 00:15:16.405 Test: blob_set_xattrs_test ...[2024-07-15 09:41:44.385179] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:15:16.405 [2024-07-15 09:41:44.385255] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:15:16.405 passed 00:15:16.405 Test: blob_thin_prov_alloc ...passed 00:15:16.665 Test: blob_insert_cluster_msg_test ...passed 00:15:16.665 Test: blob_thin_prov_rw ...passed 00:15:16.665 Test: blob_thin_prov_rle ...passed 00:15:16.665 Test: blob_thin_prov_rw_iov ...passed 00:15:16.665 Test: blob_snapshot_rw ...passed 00:15:16.924 Test: blob_snapshot_rw_iov ...passed 00:15:16.924 Test: blob_inflate_rw ...passed 00:15:16.924 Test: blob_snapshot_freeze_io ...passed 00:15:17.183 Test: blob_operation_split_rw ...passed 00:15:17.183 Test: blob_operation_split_rw_iov ...passed 00:15:17.183 Test: blob_simultaneous_operations ...[2024-07-15 09:41:45.218304] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:15:17.183 [2024-07-15 09:41:45.218405] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:17.183 [2024-07-15 09:41:45.219059] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:15:17.183 [2024-07-15 09:41:45.219084] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:17.183 [2024-07-15 09:41:45.223090] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:15:17.183 [2024-07-15 09:41:45.223140] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:17.183 [2024-07-15 09:41:45.223164] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:15:17.183 [2024-07-15 09:41:45.223171] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:17.183 passed 00:15:17.442 Test: blob_persist_test ...passed 00:15:17.442 Test: blob_decouple_snapshot ...passed 00:15:17.442 Test: blob_seek_io_unit ...passed 00:15:17.442 Test: blob_nested_freezes ...passed 00:15:17.701 Test: blob_clone_resize ...passed 00:15:17.701 Test: blob_shallow_copy ...[2024-07-15 09:41:45.603579] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:15:17.701 [2024-07-15 09:41:45.603665] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7343:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:15:17.701 [2024-07-15 09:41:45.603675] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:15:17.701 passed 00:15:17.701 Suite: blob_blob_copy_extent 00:15:17.701 Test: blob_write ...passed 00:15:17.701 Test: blob_read ...passed 00:15:17.960 Test: blob_rw_verify ...passed 00:15:17.960 Test: blob_rw_verify_iov_nomem ...passed 00:15:17.960 Test: blob_rw_iov_read_only ...passed 00:15:17.960 Test: blob_xattr ...passed 00:15:17.960 Test: blob_dirty_shutdown ...passed 00:15:18.219 Test: blob_is_degraded ...passed 00:15:18.219 Suite: blob_esnap_bs_copy_extent 00:15:18.219 Test: blob_esnap_create ...passed 00:15:18.219 Test: blob_esnap_thread_add_remove ...passed 00:15:18.220 Test: blob_esnap_clone_snapshot ...passed 00:15:18.480 Test: blob_esnap_clone_inflate ...passed 00:15:18.480 Test: blob_esnap_clone_decouple ...passed 00:15:18.480 Test: blob_esnap_clone_reload ...passed 00:15:18.480 Test: blob_esnap_hotplug ...passed 00:15:18.480 Test: blob_set_parent ...[2024-07-15 09:41:46.522499] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:15:18.480 [2024-07-15 09:41:46.522601] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:15:18.480 [2024-07-15 09:41:46.522621] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:15:18.480 [2024-07-15 09:41:46.522630] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:15:18.480 [2024-07-15 09:41:46.522839] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:15:18.480 passed 00:15:18.740 Test: blob_set_external_parent ...[2024-07-15 09:41:46.578973] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:15:18.740 [2024-07-15 09:41:46.579042] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:15:18.740 [2024-07-15 09:41:46.579052] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:15:18.740 [2024-07-15 09:41:46.579095] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:15:18.740 passed 00:15:18.740 00:15:18.740 Run Summary: Type Total Ran Passed Failed Inactive 00:15:18.740 suites 16 16 n/a 0 0 00:15:18.740 tests 376 376 376 0 0 00:15:18.740 asserts 143965 143965 143965 0 n/a 00:15:18.740 00:15:18.740 Elapsed time = 19.539 seconds 00:15:18.740 09:41:46 unittest.unittest_blob_blobfs -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:15:18.740 00:15:18.740 00:15:18.740 CUnit - A unit testing framework for C - Version 2.1-3 00:15:18.740 http://cunit.sourceforge.net/ 00:15:18.740 00:15:18.740 00:15:18.740 Suite: blob_bdev 00:15:18.740 Test: create_bs_dev ...passed 00:15:18.740 Test: create_bs_dev_ro ...passed 00:15:18.740 Test: create_bs_dev_rw ...passed 00:15:18.740 Test: claim_bs_dev ...passed 00:15:18.740 Test: claim_bs_dev_ro ...passed 00:15:18.740 Test: deferred_destroy_refs ...passed 00:15:18.740 Test: deferred_destroy_channels ...passed 00:15:18.740 Test: deferred_destroy_threads ...passed 00:15:18.740 00:15:18.740 Run Summary: Type Total Ran Passed Failed Inactive 00:15:18.740 suites 1 1 n/a 0 0 00:15:18.740 tests 8 8 8 0 0 00:15:18.740 asserts 119 119 119 0 n/a 00:15:18.740 00:15:18.740 Elapsed time = 0.000 seconds 00:15:18.740 [2024-07-15 09:41:46.609216] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 529:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:15:18.740 [2024-07-15 09:41:46.609511] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:15:18.740 09:41:46 unittest.unittest_blob_blobfs -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:15:18.740 00:15:18.740 00:15:18.740 CUnit - A unit testing framework for C - Version 2.1-3 00:15:18.740 http://cunit.sourceforge.net/ 00:15:18.740 00:15:18.740 00:15:18.740 Suite: tree 00:15:18.740 Test: blobfs_tree_op_test ...passed 00:15:18.740 00:15:18.740 Run Summary: Type Total Ran Passed Failed Inactive 00:15:18.740 suites 1 1 n/a 0 0 00:15:18.740 tests 1 1 1 0 0 00:15:18.740 asserts 27 27 27 0 n/a 00:15:18.740 00:15:18.740 Elapsed time = 0.000 seconds 00:15:18.740 09:41:46 unittest.unittest_blob_blobfs -- unit/unittest.sh@44 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:15:18.740 00:15:18.740 00:15:18.740 CUnit - A unit testing framework for C - Version 2.1-3 00:15:18.740 http://cunit.sourceforge.net/ 00:15:18.740 00:15:18.740 00:15:18.740 Suite: blobfs_async_ut 00:15:18.740 Test: fs_init ...passed 00:15:18.740 Test: fs_open ...passed 00:15:18.740 Test: fs_create ...passed 00:15:18.740 Test: fs_truncate ...passed 00:15:18.740 Test: fs_rename ...[2024-07-15 09:41:46.760334] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:15:18.740 passed 00:15:18.740 Test: fs_rw_async ...passed 00:15:18.740 Test: fs_writev_readv_async ...passed 00:15:18.740 Test: tree_find_buffer_ut ...passed 00:15:18.740 Test: channel_ops ...passed 00:15:18.999 Test: channel_ops_sync ...passed 00:15:18.999 00:15:18.999 Run Summary: Type Total Ran Passed Failed Inactive 00:15:18.999 suites 1 1 n/a 0 0 00:15:18.999 tests 10 10 10 0 0 00:15:18.999 asserts 292 292 292 0 n/a 00:15:18.999 00:15:18.999 Elapsed time = 0.211 seconds 00:15:18.999 09:41:46 unittest.unittest_blob_blobfs -- 
unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:15:18.999 00:15:18.999 00:15:18.999 CUnit - A unit testing framework for C - Version 2.1-3 00:15:18.999 http://cunit.sourceforge.net/ 00:15:18.999 00:15:18.999 00:15:18.999 Suite: blobfs_sync_ut 00:15:18.999 Test: cache_read_after_write ...[2024-07-15 09:41:46.906986] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:15:18.999 passed 00:15:18.999 Test: file_length ...passed 00:15:18.999 Test: append_write_to_extend_blob ...passed 00:15:18.999 Test: partial_buffer ...passed 00:15:18.999 Test: cache_write_null_buffer ...passed 00:15:18.999 Test: fs_create_sync ...passed 00:15:18.999 Test: fs_rename_sync ...passed 00:15:18.999 Test: cache_append_no_cache ...passed 00:15:18.999 Test: fs_delete_file_without_close ...passed 00:15:18.999 00:15:18.999 Run Summary: Type Total Ran Passed Failed Inactive 00:15:18.999 suites 1 1 n/a 0 0 00:15:18.999 tests 9 9 9 0 0 00:15:18.999 asserts 345 345 345 0 n/a 00:15:18.999 00:15:18.999 Elapsed time = 0.422 seconds 00:15:18.999 09:41:47 unittest.unittest_blob_blobfs -- unit/unittest.sh@47 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:15:18.999 00:15:18.999 00:15:18.999 CUnit - A unit testing framework for C - Version 2.1-3 00:15:18.999 http://cunit.sourceforge.net/ 00:15:18.999 00:15:18.999 00:15:18.999 Suite: blobfs_bdev_ut 00:15:18.999 Test: spdk_blobfs_bdev_detect_test ...passed 00:15:19.000 Test: spdk_blobfs_bdev_create_test ...passed 00:15:19.000 Test: spdk_blobfs_bdev_mount_test ...passed 00:15:19.000 00:15:19.000 Run Summary: Type Total Ran Passed Failed Inactive 00:15:19.000 suites 1 1 n/a 0 0 00:15:19.000 tests 3 3 3 0 0 00:15:19.000 asserts 9 9 9 0 n/a 00:15:19.000 00:15:19.000 Elapsed time = 0.000 seconds[2024-07-15 09:41:47.076211] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:15:19.000 [2024-07-15 09:41:47.076504] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:15:19.000 00:15:19.000 00:15:19.000 real 0m20.044s 00:15:19.000 user 0m20.019s 00:15:19.000 sys 0m0.230s 00:15:19.000 ************************************ 00:15:19.000 END TEST unittest_blob_blobfs 00:15:19.000 ************************************ 00:15:19.000 09:41:47 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:19.000 09:41:47 unittest.unittest_blob_blobfs -- common/autotest_common.sh@10 -- # set +x 00:15:19.260 09:41:47 unittest -- common/autotest_common.sh@1142 -- # return 0 00:15:19.260 09:41:47 unittest -- unit/unittest.sh@234 -- # run_test unittest_event unittest_event 00:15:19.260 09:41:47 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:19.260 09:41:47 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:19.260 09:41:47 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:19.260 ************************************ 00:15:19.260 START TEST unittest_event 00:15:19.260 ************************************ 00:15:19.260 09:41:47 unittest.unittest_event -- common/autotest_common.sh@1123 -- # unittest_event 00:15:19.260 09:41:47 unittest.unittest_event -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:15:19.260 00:15:19.260 
00:15:19.260 CUnit - A unit testing framework for C - Version 2.1-3 00:15:19.260 http://cunit.sourceforge.net/ 00:15:19.260 00:15:19.260 00:15:19.260 Suite: app_suite 00:15:19.260 Test: test_spdk_app_parse_args ...app_ut [options] 00:15:19.260 00:15:19.260 CPU options: 00:15:19.260 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:15:19.260 (like [0,1,10]) 00:15:19.260 --lcores lcore to CPU mapping list. The list is in the format: 00:15:19.260 [<,lcores[@CPUs]>...] 00:15:19.260 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:15:19.260 app_ut: invalid option -- z 00:15:19.260 Within the group, '-' is used for range separator, 00:15:19.260 ',' is used for single number separator. 00:15:19.260 '( )' can be omitted for single element group, 00:15:19.260 '@' can be omitted if cpus and lcores have the same value 00:15:19.260 --disable-cpumask-locks Disable CPU core lock files. 00:15:19.260 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:15:19.260 pollers in the app support interrupt mode) 00:15:19.260 -p, --main-core main (primary) core for DPDK 00:15:19.260 00:15:19.260 Configuration options: 00:15:19.260 -c, --config, --json JSON config file 00:15:19.260 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:15:19.260 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:15:19.260 --wait-for-rpc wait for RPCs to initialize subsystems 00:15:19.260 --rpcs-allowed comma-separated list of permitted RPCS 00:15:19.260 --json-ignore-init-errors don't exit on invalid config entry 00:15:19.260 00:15:19.260 Memory options: 00:15:19.260 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:15:19.260 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:15:19.260 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:15:19.260 -R, --huge-unlink unlink huge files after initialization 00:15:19.260 -n, --mem-channels number of memory channels used for DPDK 00:15:19.260 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:15:19.260 --msg-mempool-size global message memory pool size in count (default: 262143) 00:15:19.260 --no-huge run without using hugepages 00:15:19.260 -i, --shm-id shared memory ID (optional) 00:15:19.260 -g, --single-file-segments force creating just one hugetlbfs file 00:15:19.260 00:15:19.260 PCI options: 00:15:19.260 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:15:19.260 -B, --pci-blocked pci addr to block (can be used more than once) 00:15:19.260 -u, --no-pci disable PCI access 00:15:19.260 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:15:19.260 00:15:19.260 Log options: 00:15:19.260 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:15:19.260 --silence-noticelog disable notice level logging to stderr 00:15:19.260 00:15:19.260 Trace options: 00:15:19.260 --num-trace-entries number of trace entries for each core, must be power of 2, 00:15:19.260 setting 0 to disable trace (default 32768) 00:15:19.260 Tracepoints vary in size and can use more than one trace entry. 00:15:19.260 -e, --tpoint-group [:] 00:15:19.260 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:15:19.260 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:15:19.260 a tracepoint group. 
First tpoint inside a group can be enabled by 00:15:19.260 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:15:19.260 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:15:19.260 in /include/spdk_internal/trace_defs.h 00:15:19.260 00:15:19.260 Other options: 00:15:19.260 -h, --help show this usage 00:15:19.260 -v, --version print SPDK version 00:15:19.260 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:15:19.260 --env-context Opaque context for use of the env implementation 00:15:19.260 app_ut [options] 00:15:19.260 00:15:19.260 CPU options: 00:15:19.261 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:15:19.261 (like [0,1,10]) 00:15:19.261 --lcores lcore to CPU mapping list. The list is in the format: 00:15:19.261 [<,lcores[@CPUs]>...] 00:15:19.261 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:15:19.261 Within the group, '-' is used for range separator, 00:15:19.261 ',' is used for single number separator. 00:15:19.261 '( )' can be omitted for single element group, 00:15:19.261 '@' can be omitted if cpus and lcores have the same value 00:15:19.261 --disable-cpumask-locks Disable CPU core lock files. 00:15:19.261 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:15:19.261 pollers in the app support interrupt mode) 00:15:19.261 -p, --main-core main (primary) core for DPDK 00:15:19.261 00:15:19.261 Configuration options: 00:15:19.261 -c, --config, --json JSON config file 00:15:19.261 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:15:19.261 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:15:19.261 --wait-for-rpc wait for RPCs to initialize subsystems 00:15:19.261 --rpcs-allowed comma-separated list of permitted RPCS 00:15:19.261 --json-ignore-init-errors don't exit on invalid config entry 00:15:19.261 00:15:19.261 Memory options: 00:15:19.261 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:15:19.261 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:15:19.261 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:15:19.261 -R, --huge-unlink unlink huge files after initialization 00:15:19.261 -n, --mem-channels number of memory channels used for DPDK 00:15:19.261 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:15:19.261 --msg-mempool-size global message memory pool size in count (default: 262143) 00:15:19.261 --no-huge run without using hugepages 00:15:19.261 -i, --shm-id shared memory ID (optional) 00:15:19.261 -g, --single-file-segments force creating just one hugetlbfs file 00:15:19.261 00:15:19.261 PCI options: 00:15:19.261 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:15:19.261 -B, --pci-blocked pci addr to block (can be used more than once) 00:15:19.261 -u, --no-pci disable PCI access 00:15:19.261 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:15:19.261 00:15:19.261 Log options: 00:15:19.261 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:15:19.261 --silence-noticelog disable notice level logging to stderr 00:15:19.261 00:15:19.261 Trace options: 00:15:19.261 --num-trace-entries number of trace entries for each core, must be power of 2, 00:15:19.261 setting 0 to disable trace (default 32768) 00:15:19.261 Tracepoints vary in size and can use more than one trace 
entry. 00:15:19.261 -e, --tpoint-group [:] 00:15:19.261 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:15:19.261 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:15:19.261 a tracepoint group. First tpoint inside a group can be enabled by 00:15:19.261 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:15:19.261 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:15:19.261 in /include/spdk_internal/trace_defs.h 00:15:19.261 00:15:19.261 Other options: 00:15:19.261 -h, --help show this usage 00:15:19.261 -v, --version print SPDK version 00:15:19.261 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:15:19.261 --env-context Opaque context for use of the env implementation 00:15:19.261 app_ut: unrecognized option `--test-long-opt' 00:15:19.261 app_ut [options] 00:15:19.261 00:15:19.261 CPU options: 00:15:19.261 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:15:19.261 (like [0,1,10]) 00:15:19.261 --lcores lcore to CPU mapping list. The list is in the format: 00:15:19.261 [<,lcores[@CPUs]>...] 00:15:19.261 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:15:19.261 Within the group, '-' is used for range separator, 00:15:19.261 ',' is used for single number separator. 00:15:19.261 '( )' can be omitted for single element group, 00:15:19.261 '@' can be omitted if cpus and lcores have the same value 00:15:19.261 --disable-cpumask-locks Disable CPU core lock files. 00:15:19.261 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:15:19.261 pollers in the app support interrupt mode) 00:15:19.261 -p, --main-core main (primary) core for DPDK 00:15:19.261 00:15:19.261 Configuration options: 00:15:19.261 -c, --config, --json JSON config file 00:15:19.261 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:15:19.261 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:15:19.261 --wait-for-rpc wait for RPCs to initialize subsystems 00:15:19.261 --rpcs-allowed comma-separated list of permitted RPCS 00:15:19.261 --json-ignore-init-errors don't exit on invalid config entry 00:15:19.261 00:15:19.261 Memory options: 00:15:19.261 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:15:19.261 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:15:19.261 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:15:19.261 -R, --huge-unlink unlink huge files after initialization 00:15:19.261 -n, --mem-channels number of memory channels used for DPDK 00:15:19.261 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:15:19.261 --msg-mempool-size global message memory pool size in count (default: 262143) 00:15:19.261 --no-huge run without using hugepages 00:15:19.261 -i, --shm-id shared memory ID (optional) 00:15:19.261 -g, --single-file-segments force creating just one hugetlbfs file 00:15:19.261 00:15:19.261 PCI options: 00:15:19.261 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:15:19.261 -B, --pci-blocked pci addr to block (can be used more than once) 00:15:19.261 -u, --no-pci disable PCI access 00:15:19.261 [2024-07-15 09:41:47.127998] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1192:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 
00:15:19.261 [2024-07-15 09:41:47.128364] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1372:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:15:19.261 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:15:19.261 00:15:19.261 Log options: 00:15:19.261 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:15:19.261 --silence-noticelog disable notice level logging to stderr 00:15:19.261 00:15:19.261 Trace options: 00:15:19.261 --num-trace-entries number of trace entries for each core, must be power of 2, 00:15:19.261 setting 0 to disable trace (default 32768) 00:15:19.261 Tracepoints vary in size and can use more than one trace entry. 00:15:19.261 -e, --tpoint-group [:] 00:15:19.261 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:15:19.261 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:15:19.261 a tracepoint group. First tpoint inside a group can be enabled by 00:15:19.261 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:15:19.261 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:15:19.261 in /include/spdk_internal/trace_defs.h 00:15:19.261 00:15:19.261 Other options: 00:15:19.261 -h, --help show this usage 00:15:19.261 -v, --version print SPDK version 00:15:19.261 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:15:19.261 --env-context Opaque context for use of the env implementation 00:15:19.261 passed 00:15:19.261 00:15:19.261 Run Summary: Type Total Ran Passed Failed Inactive 00:15:19.261 suites 1 1 n/a 0 0 00:15:19.261 tests 1 1 1 0 0 00:15:19.261 asserts 8 8 8 0 n/a 00:15:19.261 00:15:19.261 Elapsed time = 0.000 seconds 00:15:19.261 [2024-07-15 09:41:47.128521] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1277:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:15:19.261 09:41:47 unittest.unittest_event -- unit/unittest.sh@52 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:15:19.261 00:15:19.261 00:15:19.261 CUnit - A unit testing framework for C - Version 2.1-3 00:15:19.261 http://cunit.sourceforge.net/ 00:15:19.261 00:15:19.261 00:15:19.261 Suite: app_suite 00:15:19.261 Test: test_create_reactor ...passed 00:15:19.261 Test: test_init_reactors ...passed 00:15:19.261 Test: test_event_call ...passed 00:15:19.261 Test: test_schedule_thread ...passed 00:15:19.261 Test: test_reschedule_thread ...passed 00:15:19.261 Test: test_bind_thread ...passed 00:15:19.261 Test: test_for_each_reactor ...passed 00:15:19.261 Test: test_reactor_stats ...passed 00:15:19.261 Test: test_scheduler ...passed 00:15:19.261 Test: test_governor ...passed 00:15:19.261 00:15:19.261 Run Summary: Type Total Ran Passed Failed Inactive 00:15:19.261 suites 1 1 n/a 0 0 00:15:19.261 tests 10 10 10 0 0 00:15:19.261 asserts 336 336 336 0 n/a 00:15:19.261 00:15:19.261 Elapsed time = 0.008 seconds 00:15:19.261 00:15:19.261 real 0m0.024s 00:15:19.261 user 0m0.002s 00:15:19.261 sys 0m0.020s 00:15:19.261 09:41:47 unittest.unittest_event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:19.261 ************************************ 00:15:19.261 END TEST unittest_event 00:15:19.261 ************************************ 00:15:19.261 09:41:47 unittest.unittest_event -- common/autotest_common.sh@10 -- # set +x 00:15:19.261 09:41:47 unittest -- common/autotest_common.sh@1142 -- # return 0 00:15:19.261 09:41:47 unittest -- unit/unittest.sh@235 -- # uname -s 00:15:19.261 09:41:47 
unittest -- unit/unittest.sh@235 -- # '[' FreeBSD = Linux ']' 00:15:19.261 09:41:47 unittest -- unit/unittest.sh@239 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:15:19.261 09:41:47 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:19.261 09:41:47 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:19.261 09:41:47 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:19.262 ************************************ 00:15:19.262 START TEST unittest_accel 00:15:19.262 ************************************ 00:15:19.262 09:41:47 unittest.unittest_accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:15:19.262 00:15:19.262 00:15:19.262 CUnit - A unit testing framework for C - Version 2.1-3 00:15:19.262 http://cunit.sourceforge.net/ 00:15:19.262 00:15:19.262 00:15:19.262 Suite: accel_sequence 00:15:19.262 Test: test_sequence_fill_copy ...passed 00:15:19.262 Test: test_sequence_abort ...passed 00:15:19.262 Test: test_sequence_append_error ...passed 00:15:19.262 Test: test_sequence_completion_error ...[2024-07-15 09:41:47.204021] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1946:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x35a6e60ce080 00:15:19.262 passed 00:15:19.262 Test: test_sequence_decompress ...[2024-07-15 09:41:47.204469] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1946:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x35a6e60ce080 00:15:19.262 [2024-07-15 09:41:47.204497] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1856:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x35a6e60ce080 00:15:19.262 [2024-07-15 09:41:47.204519] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1856:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x35a6e60ce080 00:15:19.262 passed 00:15:19.262 Test: test_sequence_reverse ...passed 00:15:19.262 Test: test_sequence_copy_elision ...passed 00:15:19.262 Test: test_sequence_accel_buffers ...passed 00:15:19.262 Test: test_sequence_memory_domain ...passed 00:15:19.262 Test: test_sequence_module_memory_domain ...passed 00:15:19.262 Test: test_sequence_crypto ...[2024-07-15 09:41:47.206575] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1748:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:15:19.262 [2024-07-15 09:41:47.206684] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1787:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -48 00:15:19.262 passed 00:15:19.262 Test: test_sequence_driver ...[2024-07-15 09:41:47.207892] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1895:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x35a6e60ce400 using driver: ut 00:15:19.262 passed 00:15:19.262 Test: test_sequence_same_iovs ...passed 00:15:19.262 Test: test_sequence_crc32 ...[2024-07-15 09:41:47.207983] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1960:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x35a6e60ce400 through driver: ut 00:15:19.262 passed 00:15:19.262 Suite: accel 00:15:19.262 Test: test_spdk_accel_task_complete ...passed 00:15:19.262 Test: test_get_task ...passed 00:15:19.262 Test: test_spdk_accel_submit_copy ...passed 00:15:19.262 Test: test_spdk_accel_submit_dualcast ...passed 00:15:19.262 Test: test_spdk_accel_submit_compare ...passed 00:15:19.262 Test: 
test_spdk_accel_submit_fill ...passed 00:15:19.262 Test: test_spdk_accel_submit_crc32c ...[2024-07-15 09:41:47.208901] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 422:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:15:19.262 [2024-07-15 09:41:47.208949] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 422:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:15:19.262 passed 00:15:19.262 Test: test_spdk_accel_submit_crc32cv ...passed 00:15:19.262 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:15:19.262 Test: test_spdk_accel_submit_xor ...passed 00:15:19.262 Test: test_spdk_accel_module_find_by_name ...passed 00:15:19.262 Test: test_spdk_accel_module_register ...passed 00:15:19.262 00:15:19.262 Run Summary: Type Total Ran Passed Failed Inactive 00:15:19.262 suites 2 2 n/a 0 0 00:15:19.262 tests 26 26 26 0 0 00:15:19.262 asserts 830 830 830 0 n/a 00:15:19.262 00:15:19.262 Elapsed time = 0.008 seconds 00:15:19.262 00:15:19.262 real 0m0.020s 00:15:19.262 user 0m0.019s 00:15:19.262 sys 0m0.001s 00:15:19.262 09:41:47 unittest.unittest_accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:19.262 09:41:47 unittest.unittest_accel -- common/autotest_common.sh@10 -- # set +x 00:15:19.262 ************************************ 00:15:19.262 END TEST unittest_accel 00:15:19.262 ************************************ 00:15:19.262 09:41:47 unittest -- common/autotest_common.sh@1142 -- # return 0 00:15:19.262 09:41:47 unittest -- unit/unittest.sh@240 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:15:19.262 09:41:47 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:19.262 09:41:47 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:19.262 09:41:47 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:19.262 ************************************ 00:15:19.262 START TEST unittest_ioat 00:15:19.262 ************************************ 00:15:19.262 09:41:47 unittest.unittest_ioat -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:15:19.262 00:15:19.262 00:15:19.262 CUnit - A unit testing framework for C - Version 2.1-3 00:15:19.262 http://cunit.sourceforge.net/ 00:15:19.262 00:15:19.262 00:15:19.262 Suite: ioat 00:15:19.262 Test: ioat_state_check ...passed 00:15:19.262 00:15:19.262 Run Summary: Type Total Ran Passed Failed Inactive 00:15:19.262 suites 1 1 n/a 0 0 00:15:19.262 tests 1 1 1 0 0 00:15:19.262 asserts 32 32 32 0 n/a 00:15:19.262 00:15:19.262 Elapsed time = 0.000 seconds 00:15:19.262 00:15:19.262 real 0m0.006s 00:15:19.262 user 0m0.000s 00:15:19.262 sys 0m0.005s 00:15:19.262 09:41:47 unittest.unittest_ioat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:19.262 09:41:47 unittest.unittest_ioat -- common/autotest_common.sh@10 -- # set +x 00:15:19.262 ************************************ 00:15:19.262 END TEST unittest_ioat 00:15:19.262 ************************************ 00:15:19.262 09:41:47 unittest -- common/autotest_common.sh@1142 -- # return 0 00:15:19.262 09:41:47 unittest -- unit/unittest.sh@241 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:15:19.262 09:41:47 unittest -- unit/unittest.sh@242 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:15:19.262 09:41:47 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:19.262 09:41:47 
unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:19.262 09:41:47 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:19.262 ************************************ 00:15:19.262 START TEST unittest_idxd_user 00:15:19.262 ************************************ 00:15:19.262 09:41:47 unittest.unittest_idxd_user -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:15:19.262 00:15:19.262 00:15:19.262 CUnit - A unit testing framework for C - Version 2.1-3 00:15:19.262 http://cunit.sourceforge.net/ 00:15:19.262 00:15:19.262 00:15:19.262 Suite: idxd_user 00:15:19.262 Test: test_idxd_wait_cmd ...passed 00:15:19.262 Test: test_idxd_reset_dev ...passed 00:15:19.262 Test: test_idxd_group_config ...passed 00:15:19.262 Test: test_idxd_wq_config ...passed 00:15:19.262 00:15:19.262 Run Summary: Type Total Ran Passed Failed Inactive 00:15:19.262 suites 1 1 n/a 0 0 00:15:19.262 tests 4 4 4 0 0 00:15:19.262 asserts 20 20 20 0 n/a 00:15:19.262 00:15:19.262 Elapsed time = 0.000 seconds 00:15:19.262 [2024-07-15 09:41:47.313939] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:15:19.262 [2024-07-15 09:41:47.314292] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:15:19.262 [2024-07-15 09:41:47.314336] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:15:19.262 [2024-07-15 09:41:47.314352] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:15:19.262 00:15:19.262 real 0m0.007s 00:15:19.262 user 0m0.010s 00:15:19.262 sys 0m0.005s 00:15:19.262 09:41:47 unittest.unittest_idxd_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:19.262 ************************************ 00:15:19.262 END TEST unittest_idxd_user 00:15:19.262 ************************************ 00:15:19.262 09:41:47 unittest.unittest_idxd_user -- common/autotest_common.sh@10 -- # set +x 00:15:19.523 09:41:47 unittest -- common/autotest_common.sh@1142 -- # return 0 00:15:19.523 09:41:47 unittest -- unit/unittest.sh@244 -- # run_test unittest_iscsi unittest_iscsi 00:15:19.523 09:41:47 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:19.523 09:41:47 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:19.523 09:41:47 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:19.523 ************************************ 00:15:19.523 START TEST unittest_iscsi 00:15:19.523 ************************************ 00:15:19.523 09:41:47 unittest.unittest_iscsi -- common/autotest_common.sh@1123 -- # unittest_iscsi 00:15:19.523 09:41:47 unittest.unittest_iscsi -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:15:19.523 00:15:19.523 00:15:19.523 CUnit - A unit testing framework for C - Version 2.1-3 00:15:19.523 http://cunit.sourceforge.net/ 00:15:19.523 00:15:19.523 00:15:19.523 Suite: conn_suite 00:15:19.523 Test: read_task_split_in_order_case ...passed 00:15:19.523 Test: read_task_split_reverse_order_case ...passed 00:15:19.523 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:15:19.523 Test: process_non_read_task_completion_test ...passed 00:15:19.523 Test: free_tasks_on_connection ...passed 00:15:19.523 Test: free_tasks_with_queued_datain ...passed 00:15:19.523 Test: abort_queued_datain_task_test ...passed 
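The idxd_user failures above ("Command timeout, waited 1", "Command status reg reports error 0x1") exercise a classic poll-a-status-register-with-timeout pattern. A self-contained sketch of that pattern, with a made-up register layout standing in for the real IDXD command status register:

#include <errno.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical layout: top bit means "command still active", low byte
 * carries an error code once the command completes. */
#define CMDSTS_ACTIVE	0x80000000u
#define CMDSTS_ERR_MASK	0x000000ffu

static int
wait_cmd(volatile uint32_t *cmdsts, int max_ticks)
{
	int waited = 0;

	while ((*cmdsts & CMDSTS_ACTIVE) != 0) {
		if (++waited > max_ticks) {
			fprintf(stderr, "Command timeout, waited %d\n", waited);
			return -EBUSY;
		}
	}
	uint32_t err = *cmdsts & CMDSTS_ERR_MASK;
	if (err != 0) {
		fprintf(stderr, "Command status reg reports error 0x%x\n",
			(unsigned int)err);
		return -EIO;
	}
	return 0;
}

int main(void)
{
	uint32_t reg = 0x1;	/* completed, error code 0x1, as in the log */
	return wait_cmd(&reg, 10) == 0 ? 0 : 1;
}

The unit test drives both branches: a register that never clears the busy bit hits the timeout path, and a nonzero error field hits the status-error path.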
00:15:19.523 Test: abort_queued_datain_tasks_test ...passed 00:15:19.523 00:15:19.523 Run Summary: Type Total Ran Passed Failed Inactive 00:15:19.523 suites 1 1 n/a 0 0 00:15:19.523 tests 8 8 8 0 0 00:15:19.523 asserts 230 230 230 0 n/a 00:15:19.523 00:15:19.523 Elapsed time = 0.000 seconds 00:15:19.523 09:41:47 unittest.unittest_iscsi -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:15:19.523 00:15:19.523 00:15:19.523 CUnit - A unit testing framework for C - Version 2.1-3 00:15:19.523 http://cunit.sourceforge.net/ 00:15:19.523 00:15:19.523 00:15:19.523 Suite: iscsi_suite 00:15:19.523 Test: param_negotiation_test ...passed 00:15:19.523 Test: list_negotiation_test ...passed 00:15:19.523 Test: parse_valid_test ...passed 00:15:19.523 Test: parse_invalid_test ...[2024-07-15 09:41:47.377576] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:15:19.523 [2024-07-15 09:41:47.377932] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:15:19.523 [2024-07-15 09:41:47.377962] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 207:iscsi_parse_param: *ERROR*: Empty key 00:15:19.523 [2024-07-15 09:41:47.378011] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:15:19.523 [2024-07-15 09:41:47.378038] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 256 00:15:19.523 [2024-07-15 09:41:47.378064] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 214:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:15:19.523 passed 00:15:19.523 00:15:19.523 [2024-07-15 09:41:47.378086] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 228:iscsi_parse_param: *ERROR*: Duplicated Key B 00:15:19.523 Run Summary: Type Total Ran Passed Failed Inactive 00:15:19.523 suites 1 1 n/a 0 0 00:15:19.523 tests 4 4 4 0 0 00:15:19.523 asserts 161 161 161 0 n/a 00:15:19.523 00:15:19.523 Elapsed time = 0.000 seconds 00:15:19.523 09:41:47 unittest.unittest_iscsi -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:15:19.523 00:15:19.523 00:15:19.523 CUnit - A unit testing framework for C - Version 2.1-3 00:15:19.523 http://cunit.sourceforge.net/ 00:15:19.523 00:15:19.523 00:15:19.523 Suite: iscsi_target_node_suite 00:15:19.523 Test: add_lun_test_cases ...[2024-07-15 09:41:47.387706] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1253:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:15:19.523 passed 00:15:19.523 Test: allow_any_allowed ...passed 00:15:19.523 Test: allow_ipv6_allowed ...passed 00:15:19.523 Test: allow_ipv6_denied ...passed 00:15:19.523 Test: allow_ipv6_invalid ...passed 00:15:19.523 Test: allow_ipv4_allowed ...passed 00:15:19.523 Test: allow_ipv4_denied ...passed 00:15:19.523 Test: allow_ipv4_invalid ...passed 00:15:19.523 Test: node_access_allowed ...passed 00:15:19.523 Test: node_access_denied_by_empty_netmask ...passed 00:15:19.523 Test: node_access_multi_initiator_groups_cases ...passed[2024-07-15 09:41:47.388195] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1258:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:15:19.523 [2024-07-15 09:41:47.388234] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:15:19.523 [2024-07-15 09:41:47.388276] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: 
*ERROR*: SCSI device is not found 00:15:19.523 [2024-07-15 09:41:47.388318] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1270:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:15:19.523 00:15:19.523 Test: allow_iscsi_name_multi_maps_case ...passed 00:15:19.523 Test: chap_param_test_cases ...[2024-07-15 09:41:47.388588] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:15:19.523 [2024-07-15 09:41:47.388625] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:15:19.523 passed 00:15:19.523 00:15:19.523 Run Summary: Type Total Ran Passed Failed Inactive 00:15:19.523 suites 1 1 n/a 0 0 00:15:19.523 tests 13 13 13 0 0 00:15:19.523 asserts 50 50 50 0 n/a 00:15:19.523 00:15:19.523 Elapsed time = 0.008 seconds[2024-07-15 09:41:47.388672] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:15:19.523 [2024-07-15 09:41:47.388715] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:15:19.523 [2024-07-15 09:41:47.388762] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1030:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:15:19.523 00:15:19.523 09:41:47 unittest.unittest_iscsi -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:15:19.523 00:15:19.523 00:15:19.523 CUnit - A unit testing framework for C - Version 2.1-3 00:15:19.523 http://cunit.sourceforge.net/ 00:15:19.523 00:15:19.523 00:15:19.523 Suite: iscsi_suite 00:15:19.524 Test: op_login_check_target_test ...[2024-07-15 09:41:47.400424] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1439:iscsi_op_login_check_target: *ERROR*: access denied 00:15:19.524 passed 00:15:19.524 Test: op_login_session_normal_test ...[2024-07-15 09:41:47.401044] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:15:19.524 [2024-07-15 09:41:47.401103] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:15:19.524 [2024-07-15 09:41:47.401150] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:15:19.524 [2024-07-15 09:41:47.401264] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:15:19.524 [2024-07-15 09:41:47.401312] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1475:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:15:19.524 [2024-07-15 09:41:47.401395] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 703:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:15:19.524 [2024-07-15 09:41:47.401436] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1475:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:15:19.524 passed 00:15:19.524 Test: maxburstlength_test ...passed 00:15:19.524 Test: underflow_for_read_transfer_test ...passed 00:15:19.524 Test: underflow_for_zero_read_transfer_test ...[2024-07-15 09:41:47.401601] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the 
value sent by R2T PDU 00:15:19.524 [2024-07-15 09:41:47.401652] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4569:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:15:19.524 passed 00:15:19.524 Test: underflow_for_request_sense_test ...passed 00:15:19.524 Test: underflow_for_check_condition_test ...passed 00:15:19.524 Test: add_transfer_task_test ...passed 00:15:19.524 Test: get_transfer_task_test ...passed 00:15:19.524 Test: del_transfer_task_test ...passed 00:15:19.524 Test: clear_all_transfer_tasks_test ...passed 00:15:19.524 Test: build_iovs_test ...passed 00:15:19.524 Test: build_iovs_with_md_test ...passed 00:15:19.524 Test: pdu_hdr_op_login_test ...[2024-07-15 09:41:47.402043] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1256:iscsi_op_login_rsp_init: *ERROR*: transit error 00:15:19.524 passed 00:15:19.524 Test: pdu_hdr_op_text_test ...[2024-07-15 09:41:47.402097] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1264:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:15:19.524 [2024-07-15 09:41:47.402143] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1277:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:15:19.524 [2024-07-15 09:41:47.402203] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2259:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:15:19.524 [2024-07-15 09:41:47.402251] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2290:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:15:19.524 passed 00:15:19.524 Test: pdu_hdr_op_logout_test ...passed 00:15:19.524 Test: pdu_hdr_op_scsi_test ...[2024-07-15 09:41:47.402299] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2304:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:15:19.524 [2024-07-15 09:41:47.402352] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2535:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 
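The parse_invalid_test failures in param_ut above all come down to shape and length checks on "Key=Value" text. A sketch of those rules, where the 63-byte key and 8192-byte value limits mirror the errors in the log but the function itself is illustrative, not SPDK's iscsi_parse_param (the duplicated-key check is omitted for brevity):

#include <stdio.h>
#include <string.h>

#define MAX_KEY_LEN 63
#define MAX_VAL_LEN 8192

static int
parse_param(const char *data)
{
	const char *eq = strchr(data, '=');

	if (eq == NULL) {
		fprintf(stderr, "'=' not found\n");
		return -1;
	}
	if (eq == data) {
		fprintf(stderr, "Empty key\n");
		return -1;
	}
	if ((size_t)(eq - data) > MAX_KEY_LEN) {
		fprintf(stderr, "Key name length is bigger than %d\n", MAX_KEY_LEN);
		return -1;
	}
	if (strlen(eq + 1) > MAX_VAL_LEN) {
		fprintf(stderr, "Overflow Val %zu\n", strlen(eq + 1));
		return -1;
	}
	return 0;
}

int main(void)
{
	parse_param("HeaderDigest=None");	/* accepted */
	parse_param("NoEqualsSign");		/* "'=' not found" */
	parse_param("=None");			/* "Empty key" */
	return 0;
}

A value of 8193 bytes would trip the overflow branch, which is exactly the "Overflow Val 8193" case the test feeds in above.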
00:15:19.524 [2024-07-15 09:41:47.402407] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:15:19.524 [2024-07-15 09:41:47.402454] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:15:19.524 [2024-07-15 09:41:47.402495] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3382:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:15:19.524 [2024-07-15 09:41:47.402541] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3416:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:15:19.524 [2024-07-15 09:41:47.402583] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3423:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:15:19.524 passed 00:15:19.524 Test: pdu_hdr_op_task_mgmt_test ...passed 00:15:19.524 Test: pdu_hdr_op_nopout_test ...[2024-07-15 09:41:47.402640] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3446:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:15:19.524 [2024-07-15 09:41:47.402692] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3623:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:15:19.524 [2024-07-15 09:41:47.402738] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3712:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:15:19.524 [2024-07-15 09:41:47.402797] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3731:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:15:19.524 passed 00:15:19.524 Test: pdu_hdr_op_data_test ...[2024-07-15 09:41:47.402846] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:15:19.524 [2024-07-15 09:41:47.402891] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:15:19.524 [2024-07-15 09:41:47.402937] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3761:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:15:19.524 [2024-07-15 09:41:47.402988] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4204:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:15:19.524 [2024-07-15 09:41:47.403041] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:15:19.524 [2024-07-15 09:41:47.403082] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:15:19.524 [2024-07-15 09:41:47.403131] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4235:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:15:19.524 passed 00:15:19.524 Test: empty_text_with_cbit_test ...passed 00:15:19.524 Test: pdu_payload_read_test ...[2024-07-15 09:41:47.403187] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4240:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:15:19.524 [2024-07-15 09:41:47.403228] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4251:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:15:19.524 [2024-07-15 09:41:47.403298] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4263:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:15:19.524 [2024-07-15 09:41:47.404841] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4650:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:15:19.524 passed 00:15:19.524 Test: data_out_pdu_sequence_test ...passed 00:15:19.524 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:15:19.524 00:15:19.524 Run Summary: Type Total Ran Passed Failed Inactive 00:15:19.524 suites 1 1 n/a 0 0 00:15:19.524 tests 24 24 24 0 0 00:15:19.524 asserts 150253 150253 150253 0 n/a 00:15:19.524 00:15:19.524 Elapsed time = 0.016 seconds 00:15:19.524 09:41:47 unittest.unittest_iscsi -- unit/unittest.sh@72 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:15:19.524 00:15:19.524 00:15:19.524 CUnit - A unit testing framework for C - Version 2.1-3 00:15:19.524 http://cunit.sourceforge.net/ 00:15:19.524 00:15:19.524 00:15:19.524 Suite: init_grp_suite 00:15:19.524 Test: create_initiator_group_success_case ...passed 00:15:19.524 Test: find_initiator_group_success_case ...passed 00:15:19.524 Test: register_initiator_group_twice_case ...passed 00:15:19.524 Test: add_initiator_name_success_case ...passed 00:15:19.524 Test: add_initiator_name_fail_case ...passed 00:15:19.524 Test: delete_all_initiator_names_success_case ...passed 00:15:19.524 Test: add_netmask_success_case ...passed 00:15:19.524 Test: add_netmask_fail_case ...passed 00:15:19.524 Test: delete_all_netmasks_success_case ...passed 00:15:19.524 Test: initiator_name_overwrite_all_to_any_case ...passed 00:15:19.524 Test: netmask_overwrite_all_to_any_case ...passed 00:15:19.524 Test: add_delete_initiator_names_case ...passed 00:15:19.524 Test: add_duplicated_initiator_names_case ...passed 00:15:19.524 Test: delete_nonexisting_initiator_names_case ...passed 00:15:19.524 Test: add_delete_netmasks_case ...passed 00:15:19.524 Test: add_duplicated_netmasks_case ...passed 00:15:19.524 Test: delete_nonexisting_netmasks_case ...passed 00:15:19.524 00:15:19.524 Run Summary: Type Total Ran Passed Failed Inactive 00:15:19.524 suites 1 1 n/a 0 0 00:15:19.524 tests 17 17 17 0 0 00:15:19.524 asserts 108 108 108 0 n/a 00:15:19.524 00:15:19.524 Elapsed time = 0.000 seconds 00:15:19.524 [2024-07-15 09:41:47.420585] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:15:19.524 [2024-07-15 09:41:47.420910] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:15:19.524 09:41:47 unittest.unittest_iscsi -- unit/unittest.sh@73 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:15:19.524 00:15:19.524 00:15:19.524 CUnit - A unit testing framework for C - Version 2.1-3 00:15:19.524 http://cunit.sourceforge.net/ 00:15:19.524 00:15:19.524 00:15:19.524 Suite: portal_grp_suite 00:15:19.524 Test: portal_create_ipv4_normal_case ...passed 00:15:19.524 Test: portal_create_ipv6_normal_case ...passed 00:15:19.524 Test: portal_create_ipv4_wildcard_case ...passed 00:15:19.524 Test: portal_create_ipv6_wildcard_case ...passed 00:15:19.524 Test: portal_create_twice_case ...[2024-07-15 09:41:47.432103] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:15:19.525 passed 00:15:19.525 Test: portal_grp_register_unregister_case ...passed 00:15:19.525 Test: portal_grp_register_twice_case ...passed 00:15:19.525 Test: portal_grp_add_delete_case ...passed 00:15:19.525 Test: portal_grp_add_delete_twice_case ...passed 
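Two of the checks exercised just above are simple capacity and uniqueness rules: the init_grp errors cap initiator and netmask counts at MAX_INITIATOR(=256) and MAX_NETMASK(=256), and portal_create_twice_case rejects a second registration of the same (address, port) pair. A sketch of the duplicate-portal check over a fixed-size table, with a stand-in struct rather than SPDK's real portal list:

#include <stdio.h>
#include <string.h>

#define MAX_PORTALS 32

struct portal {
	char host[64];
	char port[16];
};

static struct portal g_portals[MAX_PORTALS];
static int g_num_portals;

static int
portal_create(const char *host, const char *port)
{
	/* Uniqueness rule: the (host, port) pair may only exist once. */
	for (int i = 0; i < g_num_portals; i++) {
		if (strcmp(g_portals[i].host, host) == 0 &&
		    strcmp(g_portals[i].port, port) == 0) {
			fprintf(stderr, "portal (%s, %s) already exists\n", host, port);
			return -1;
		}
	}
	/* Capacity rule, the same idea as the MAX_INITIATOR/MAX_NETMASK caps. */
	if (g_num_portals == MAX_PORTALS) {
		return -1;
	}
	snprintf(g_portals[g_num_portals].host, sizeof(g_portals[0].host), "%s", host);
	snprintf(g_portals[g_num_portals].port, sizeof(g_portals[0].port), "%s", port);
	g_num_portals++;
	return 0;
}

int main(void)
{
	portal_create("192.168.2.0", "3260");			/* registered */
	return portal_create("192.168.2.0", "3260") ? 1 : 0;	/* rejected, as in the test */
}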
00:15:19.525 00:15:19.525 Run Summary: Type Total Ran Passed Failed Inactive 00:15:19.525 suites 1 1 n/a 0 0 00:15:19.525 tests 9 9 9 0 0 00:15:19.525 asserts 44 44 44 0 n/a 00:15:19.525 00:15:19.525 Elapsed time = 0.008 seconds 00:15:19.525 00:15:19.525 real 0m0.069s 00:15:19.525 user 0m0.038s 00:15:19.525 sys 0m0.030s 00:15:19.525 09:41:47 unittest.unittest_iscsi -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:19.525 ************************************ 00:15:19.525 END TEST unittest_iscsi 00:15:19.525 ************************************ 00:15:19.525 09:41:47 unittest.unittest_iscsi -- common/autotest_common.sh@10 -- # set +x 00:15:19.525 09:41:47 unittest -- common/autotest_common.sh@1142 -- # return 0 00:15:19.525 09:41:47 unittest -- unit/unittest.sh@245 -- # run_test unittest_json unittest_json 00:15:19.525 09:41:47 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:19.525 09:41:47 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:19.525 09:41:47 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:19.525 ************************************ 00:15:19.525 START TEST unittest_json 00:15:19.525 ************************************ 00:15:19.525 09:41:47 unittest.unittest_json -- common/autotest_common.sh@1123 -- # unittest_json 00:15:19.525 09:41:47 unittest.unittest_json -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:15:19.525 00:15:19.525 00:15:19.525 CUnit - A unit testing framework for C - Version 2.1-3 00:15:19.525 http://cunit.sourceforge.net/ 00:15:19.525 00:15:19.525 00:15:19.525 Suite: json 00:15:19.525 Test: test_parse_literal ...passed 00:15:19.525 Test: test_parse_string_simple ...passed 00:15:19.525 Test: test_parse_string_control_chars ...passed 00:15:19.525 Test: test_parse_string_utf8 ...passed 00:15:19.525 Test: test_parse_string_escapes_twochar ...passed 00:15:19.525 Test: test_parse_string_escapes_unicode ...passed 00:15:19.525 Test: test_parse_number ...passed 00:15:19.525 Test: test_parse_array ...passed 00:15:19.525 Test: test_parse_object ...passed 00:15:19.525 Test: test_parse_nesting ...passed 00:15:19.525 Test: test_parse_comment ...passed 00:15:19.525 00:15:19.525 Run Summary: Type Total Ran Passed Failed Inactive 00:15:19.525 suites 1 1 n/a 0 0 00:15:19.525 tests 11 11 11 0 0 00:15:19.525 asserts 1516 1516 1516 0 n/a 00:15:19.525 00:15:19.525 Elapsed time = 0.000 seconds 00:15:19.525 09:41:47 unittest.unittest_json -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:15:19.525 00:15:19.525 00:15:19.525 CUnit - A unit testing framework for C - Version 2.1-3 00:15:19.525 http://cunit.sourceforge.net/ 00:15:19.525 00:15:19.525 00:15:19.525 Suite: json 00:15:19.525 Test: test_strequal ...passed 00:15:19.525 Test: test_num_to_uint16 ...passed 00:15:19.525 Test: test_num_to_int32 ...passed 00:15:19.525 Test: test_num_to_uint64 ...passed 00:15:19.525 Test: test_decode_object ...passed 00:15:19.525 Test: test_decode_array ...passed 00:15:19.525 Test: test_decode_bool ...passed 00:15:19.525 Test: test_decode_uint16 ...passed 00:15:19.525 Test: test_decode_int32 ...passed 00:15:19.525 Test: test_decode_uint32 ...passed 00:15:19.525 Test: test_decode_uint64 ...passed 00:15:19.525 Test: test_decode_string ...passed 00:15:19.525 Test: test_decode_uuid ...passed 00:15:19.525 Test: test_find ...passed 00:15:19.525 Test: test_find_array ...passed 00:15:19.525 Test: test_iterating ...passed 00:15:19.525 Test: 
test_free_object ...passed 00:15:19.525 00:15:19.525 Run Summary: Type Total Ran Passed Failed Inactive 00:15:19.525 suites 1 1 n/a 0 0 00:15:19.525 tests 17 17 17 0 0 00:15:19.525 asserts 236 236 236 0 n/a 00:15:19.525 00:15:19.525 Elapsed time = 0.000 seconds 00:15:19.525 09:41:47 unittest.unittest_json -- unit/unittest.sh@79 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:15:19.525 00:15:19.525 00:15:19.525 CUnit - A unit testing framework for C - Version 2.1-3 00:15:19.525 http://cunit.sourceforge.net/ 00:15:19.525 00:15:19.525 00:15:19.525 Suite: json 00:15:19.525 Test: test_write_literal ...passed 00:15:19.525 Test: test_write_string_simple ...passed 00:15:19.525 Test: test_write_string_escapes ...passed 00:15:19.525 Test: test_write_string_utf16le ...passed 00:15:19.525 Test: test_write_number_int32 ...passed 00:15:19.525 Test: test_write_number_uint32 ...passed 00:15:19.525 Test: test_write_number_uint128 ...passed 00:15:19.525 Test: test_write_string_number_uint128 ...passed 00:15:19.525 Test: test_write_number_int64 ...passed 00:15:19.525 Test: test_write_number_uint64 ...passed 00:15:19.525 Test: test_write_number_double ...passed 00:15:19.525 Test: test_write_uuid ...passed 00:15:19.525 Test: test_write_array ...passed 00:15:19.525 Test: test_write_object ...passed 00:15:19.525 Test: test_write_nesting ...passed 00:15:19.525 Test: test_write_val ...passed 00:15:19.525 00:15:19.525 Run Summary: Type Total Ran Passed Failed Inactive 00:15:19.525 suites 1 1 n/a 0 0 00:15:19.525 tests 16 16 16 0 0 00:15:19.525 asserts 918 918 918 0 n/a 00:15:19.525 00:15:19.525 Elapsed time = 0.000 seconds 00:15:19.525 09:41:47 unittest.unittest_json -- unit/unittest.sh@80 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:15:19.525 00:15:19.525 00:15:19.525 CUnit - A unit testing framework for C - Version 2.1-3 00:15:19.525 http://cunit.sourceforge.net/ 00:15:19.525 00:15:19.525 00:15:19.525 Suite: jsonrpc 00:15:19.525 Test: test_parse_request ...passed 00:15:19.525 Test: test_parse_request_streaming ...passed 00:15:19.525 00:15:19.525 Run Summary: Type Total Ran Passed Failed Inactive 00:15:19.525 suites 1 1 n/a 0 0 00:15:19.525 tests 2 2 2 0 0 00:15:19.525 asserts 289 289 289 0 n/a 00:15:19.525 00:15:19.525 Elapsed time = 0.000 seconds 00:15:19.525 00:15:19.525 real 0m0.036s 00:15:19.525 user 0m0.022s 00:15:19.525 sys 0m0.024s 00:15:19.525 09:41:47 unittest.unittest_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:19.525 09:41:47 unittest.unittest_json -- common/autotest_common.sh@10 -- # set +x 00:15:19.525 ************************************ 00:15:19.525 END TEST unittest_json 00:15:19.525 ************************************ 00:15:19.525 09:41:47 unittest -- common/autotest_common.sh@1142 -- # return 0 00:15:19.525 09:41:47 unittest -- unit/unittest.sh@246 -- # run_test unittest_rpc unittest_rpc 00:15:19.525 09:41:47 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:19.525 09:41:47 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:19.525 09:41:47 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:19.525 ************************************ 00:15:19.525 START TEST unittest_rpc 00:15:19.525 ************************************ 00:15:19.525 09:41:47 unittest.unittest_rpc -- common/autotest_common.sh@1123 -- # unittest_rpc 00:15:19.525 09:41:47 unittest.unittest_rpc -- unit/unittest.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 
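Every suite in this run prints the same banner and "Run Summary" table because the test binaries all go through CUnit's basic interface. A minimal, self-contained skeleton of one such binary; the suite and test names here are placeholders, not the real jsonrpc tests:

#include <CUnit/Basic.h>

static void
test_parse_request(void)
{
	/* Placeholder assertion; real tests call into the library under test. */
	CU_ASSERT_EQUAL(2 + 2, 4);
}

int main(void)
{
	if (CU_initialize_registry() != CUE_SUCCESS) {
		return CU_get_error();
	}

	CU_pSuite suite = CU_add_suite("jsonrpc", NULL, NULL);
	if (suite == NULL ||
	    CU_add_test(suite, "test_parse_request", test_parse_request) == NULL) {
		CU_cleanup_registry();
		return CU_get_error();
	}

	CU_basic_set_mode(CU_BRM_VERBOSE);
	CU_basic_run_tests();	/* emits the Suite/Test/Run Summary blocks seen throughout this log */
	CU_cleanup_registry();
	return CU_get_error();
}

Build with -lcunit; the suites/tests/asserts rows in the summary tables above are the aggregate counts CU_basic_run_tests() reports.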
00:15:19.525 00:15:19.525 00:15:19.525 CUnit - A unit testing framework for C - Version 2.1-3 00:15:19.525 http://cunit.sourceforge.net/ 00:15:19.525 00:15:19.525 00:15:19.525 Suite: rpc 00:15:19.525 Test: test_jsonrpc_handler ...passed 00:15:19.525 Test: test_spdk_rpc_is_method_allowed ...passed 00:15:19.525 Test: test_rpc_get_methods ...passed 00:15:19.525 Test: test_rpc_spdk_get_version ...passed 00:15:19.525 Test: test_spdk_rpc_listen_close ...passed 00:15:19.525 Test: test_rpc_run_multiple_servers ...passed 00:15:19.525 00:15:19.525 [2024-07-15 09:41:47.579035] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 446:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:15:19.525 Run Summary: Type Total Ran Passed Failed Inactive 00:15:19.525 suites 1 1 n/a 0 0 00:15:19.525 tests 6 6 6 0 0 00:15:19.525 asserts 23 23 23 0 n/a 00:15:19.525 00:15:19.525 Elapsed time = 0.000 seconds 00:15:19.525 00:15:19.525 real 0m0.007s 00:15:19.525 user 0m0.006s 00:15:19.525 sys 0m0.001s 00:15:19.525 09:41:47 unittest.unittest_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:19.525 09:41:47 unittest.unittest_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.525 ************************************ 00:15:19.525 END TEST unittest_rpc 00:15:19.525 ************************************ 00:15:19.785 09:41:47 unittest -- common/autotest_common.sh@1142 -- # return 0 00:15:19.785 09:41:47 unittest -- unit/unittest.sh@247 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:15:19.785 09:41:47 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:19.785 09:41:47 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:19.785 09:41:47 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:19.785 ************************************ 00:15:19.785 START TEST unittest_notify 00:15:19.785 ************************************ 00:15:19.785 09:41:47 unittest.unittest_notify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:15:19.785 00:15:19.785 00:15:19.785 CUnit - A unit testing framework for C - Version 2.1-3 00:15:19.785 http://cunit.sourceforge.net/ 00:15:19.785 00:15:19.785 00:15:19.785 Suite: app_suite 00:15:19.785 Test: notify ...passed 00:15:19.785 00:15:19.785 Run Summary: Type Total Ran Passed Failed Inactive 00:15:19.785 suites 1 1 n/a 0 0 00:15:19.785 tests 1 1 1 0 0 00:15:19.785 asserts 13 13 13 0 n/a 00:15:19.785 00:15:19.785 Elapsed time = 0.000 seconds 00:15:19.785 00:15:19.785 real 0m0.006s 00:15:19.785 user 0m0.005s 00:15:19.785 sys 0m0.001s 00:15:19.785 09:41:47 unittest.unittest_notify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:19.785 09:41:47 unittest.unittest_notify -- common/autotest_common.sh@10 -- # set +x 00:15:19.785 ************************************ 00:15:19.785 END TEST unittest_notify 00:15:19.785 ************************************ 00:15:19.785 09:41:47 unittest -- common/autotest_common.sh@1142 -- # return 0 00:15:19.785 09:41:47 unittest -- unit/unittest.sh@248 -- # run_test unittest_nvme unittest_nvme 00:15:19.785 09:41:47 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:19.785 09:41:47 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:19.785 09:41:47 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:19.785 ************************************ 00:15:19.785 START TEST unittest_nvme 00:15:19.785 ************************************ 00:15:19.785 09:41:47 
unittest.unittest_nvme -- common/autotest_common.sh@1123 -- # unittest_nvme 00:15:19.785 09:41:47 unittest.unittest_nvme -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:15:19.785 00:15:19.785 00:15:19.785 CUnit - A unit testing framework for C - Version 2.1-3 00:15:19.785 http://cunit.sourceforge.net/ 00:15:19.785 00:15:19.785 00:15:19.785 Suite: nvme 00:15:19.785 Test: test_opc_data_transfer ...passed 00:15:19.785 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:15:19.785 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:15:19.785 Test: test_trid_parse_and_compare ...[2024-07-15 09:41:47.684005] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1199:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:15:19.785 [2024-07-15 09:41:47.684253] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:15:19.785 [2024-07-15 09:41:47.684268] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1212:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:15:19.785 [2024-07-15 09:41:47.684278] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:15:19.785 [2024-07-15 09:41:47.684287] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1222:parse_next_key: *ERROR*: Key without value 00:15:19.785 [2024-07-15 09:41:47.684296] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:15:19.785 passed 00:15:19.785 Test: test_trid_trtype_str ...passed 00:15:19.785 Test: test_trid_adrfam_str ...passed 00:15:19.785 Test: test_nvme_ctrlr_probe ...passed 00:15:19.785 Test: test_spdk_nvme_probe ...passed 00:15:19.785 Test: test_spdk_nvme_connect ...[2024-07-15 09:41:47.684398] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:15:19.786 [2024-07-15 09:41:47.684422] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:15:19.786 [2024-07-15 09:41:47.684432] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:15:19.786 [2024-07-15 09:41:47.684443] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 822:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:15:19.786 [2024-07-15 09:41:47.684452] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:15:19.786 [2024-07-15 09:41:47.684475] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1010:spdk_nvme_connect: *ERROR*: No transport ID specified 00:15:19.786 passed 00:15:19.786 Test: test_nvme_ctrlr_probe_internal ...passed 00:15:19.786 Test: test_nvme_init_controllers ...passed 00:15:19.786 Test: test_nvme_driver_init ...[2024-07-15 09:41:47.684543] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:15:19.786 [2024-07-15 09:41:47.684566] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:15:19.786 [2024-07-15 09:41:47.684575] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:19.786 [2024-07-15 09:41:47.684588] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:15:19.786 [2024-07-15 09:41:47.684606] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:15:19.786 [2024-07-15 09:41:47.684616] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:15:19.786 passed 00:15:19.786 Test: test_spdk_nvme_detach ...passed 00:15:19.786 Test: test_nvme_completion_poll_cb ...passed 00:15:19.786 Test: test_nvme_user_copy_cmd_complete ...passed[2024-07-15 09:41:47.794695] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:15:19.786 00:15:19.786 Test: test_nvme_allocate_request_null ...passed 00:15:19.786 Test: test_nvme_allocate_request ...passed 00:15:19.786 Test: test_nvme_free_request ...passed 00:15:19.786 Test: test_nvme_allocate_request_user_copy ...passed 00:15:19.786 Test: test_nvme_robust_mutex_init_shared ...passed 00:15:19.786 Test: test_nvme_request_check_timeout ...passed 00:15:19.786 Test: test_nvme_wait_for_completion ...passed 00:15:19.786 Test: test_spdk_nvme_parse_func ...passed 00:15:19.786 Test: test_spdk_nvme_detach_async ...passed 00:15:19.786 Test: test_nvme_parse_addr ...passed 00:15:19.786 00:15:19.786 Run Summary: Type Total Ran Passed Failed Inactive 00:15:19.786 suites 1 1 n/a 0 0 00:15:19.786 tests 25 25 25 0 0 00:15:19.786 asserts 326 326 326 0 n/a 00:15:19.786 00:15:19.786 Elapsed time = 0.000 seconds 00:15:19.786 [2024-07-15 09:41:47.795007] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1609:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:15:19.786 09:41:47 unittest.unittest_nvme -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:15:19.786 00:15:19.786 00:15:19.786 CUnit - A unit testing framework for C - Version 2.1-3 00:15:19.786 http://cunit.sourceforge.net/ 00:15:19.786 00:15:19.786 00:15:19.786 Suite: nvme_ctrlr 00:15:19.786 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-07-15 09:41:47.802981] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:19.786 passed 00:15:19.786 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-07-15 09:41:47.804512] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:19.786 passed 00:15:19.786 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-07-15 09:41:47.805695] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:19.786 passed 00:15:19.786 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-07-15 09:41:47.806908] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:19.786 passed 00:15:19.786 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-07-15 09:41:47.808101] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:19.786 [2024-07-15 09:41:47.809247] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-15 09:41:47.810397] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: 
*ERROR*: [] Ctrlr enable failed with error: -22[2024-07-15 09:41:47.811550] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:15:19.786 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-07-15 09:41:47.813890] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:19.786 [2024-07-15 09:41:47.816160] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-15 09:41:47.817320] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:15:19.786 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-07-15 09:41:47.819642] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:19.786 [2024-07-15 09:41:47.820779] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-15 09:41:47.823043] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:15:19.786 Test: test_nvme_ctrlr_init_delay ...[2024-07-15 09:41:47.825377] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:19.786 passed 00:15:19.786 Test: test_alloc_io_qpair_rr_1 ...[2024-07-15 09:41:47.826562] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:19.786 passed 00:15:19.786 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:15:19.786 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:15:19.786 Test: test_alloc_io_qpair_wrr_1 ...passed[2024-07-15 09:41:47.826620] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:15:19.786 [2024-07-15 09:41:47.826639] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:15:19.786 [2024-07-15 09:41:47.826652] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:15:19.786 [2024-07-15 09:41:47.826664] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:15:19.786 [2024-07-15 09:41:47.826735] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:19.786 00:15:19.786 Test: test_alloc_io_qpair_wrr_2 ...[2024-07-15 09:41:47.826778] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:19.786 passed 00:15:19.786 Test: test_spdk_nvme_ctrlr_update_firmware ...passed 00:15:19.786 Test: test_nvme_ctrlr_fail ...passed 00:15:19.786 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:15:19.786 
Test: test_nvme_ctrlr_set_supported_features ...passed 00:15:19.786 Test: test_nvme_ctrlr_set_host_feature ...[2024-07-15 09:41:47.826799] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:15:19.786 [2024-07-15 09:41:47.826829] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4993:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:15:19.786 [2024-07-15 09:41:47.826843] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5030:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:15:19.786 [2024-07-15 09:41:47.826855] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5070:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 00:15:19.786 [2024-07-15 09:41:47.826869] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5030:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:15:19.786 [2024-07-15 09:41:47.826886] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:15:19.786 [2024-07-15 09:41:47.826912] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:19.786 passed 00:15:19.786 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:15:19.786 Test: test_nvme_ctrlr_test_active_ns ...[2024-07-15 09:41:47.828089] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:19.786 passed 00:15:19.786 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:15:19.786 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:15:19.786 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:15:19.786 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-07-15 09:41:47.875770] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:20.071 passed 00:15:20.071 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-07-15 09:41:47.882694] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:20.071 passed 00:15:20.071 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-07-15 09:41:47.883910] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:20.071 [2024-07-15 09:41:47.883962] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3003:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:15:20.071 passed 00:15:20.071 Test: test_alloc_io_qpair_fail ...[2024-07-15 09:41:47.885102] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:20.071 [2024-07-15 09:41:47.885147] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 506:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:15:20.071 passed 00:15:20.071 Test: test_nvme_ctrlr_add_remove_process ...passed 00:15:20.071 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:15:20.071 Test: 
test_nvme_ctrlr_set_state ...passed 00:15:20.071 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-07-15 09:41:47.885210] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1547:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 00:15:20.071 [2024-07-15 09:41:47.885226] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:20.071 passed 00:15:20.071 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-07-15 09:41:47.891331] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:20.071 passed 00:15:20.071 Test: test_nvme_ctrlr_ns_mgmt ...[2024-07-15 09:41:47.904704] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:20.071 passed 00:15:20.071 Test: test_nvme_ctrlr_reset ...[2024-07-15 09:41:47.905934] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:20.071 passed 00:15:20.071 Test: test_nvme_ctrlr_aer_callback ...[2024-07-15 09:41:47.906058] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:20.071 passed 00:15:20.071 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-07-15 09:41:47.907232] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:20.071 passed 00:15:20.071 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:15:20.071 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:15:20.071 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-07-15 09:41:47.908542] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:20.071 passed 00:15:20.071 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:15:20.071 Test: test_nvme_ctrlr_ana_resize ...[2024-07-15 09:41:47.909757] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:20.071 passed 00:15:20.071 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:15:20.071 Test: test_nvme_transport_ctrlr_ready ...[2024-07-15 09:41:47.910951] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4152:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:15:20.071 passed 00:15:20.071 Test: test_nvme_ctrlr_disable ...[2024-07-15 09:41:47.910984] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4205:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 53 (error) 00:15:20.071 [2024-07-15 09:41:47.910998] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:20.071 passed 00:15:20.071 00:15:20.071 Run Summary: Type Total Ran Passed Failed Inactive 00:15:20.071 suites 1 1 n/a 0 0 00:15:20.071 tests 44 44 44 0 0 00:15:20.071 asserts 10434 10434 10434 0 n/a 
00:15:20.071 00:15:20.071 Elapsed time = 0.070 seconds 00:15:20.071 09:41:47 unittest.unittest_nvme -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:15:20.071 00:15:20.071 00:15:20.071 CUnit - A unit testing framework for C - Version 2.1-3 00:15:20.071 http://cunit.sourceforge.net/ 00:15:20.071 00:15:20.071 00:15:20.071 Suite: nvme_ctrlr_cmd 00:15:20.071 Test: test_get_log_pages ...passed 00:15:20.071 Test: test_set_feature_cmd ...passed 00:15:20.071 Test: test_set_feature_ns_cmd ...passed 00:15:20.071 Test: test_get_feature_cmd ...passed 00:15:20.071 Test: test_get_feature_ns_cmd ...passed 00:15:20.071 Test: test_abort_cmd ...passed 00:15:20.071 Test: test_set_host_id_cmds ...[2024-07-15 09:41:47.921772] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:15:20.071 passed 00:15:20.071 Test: test_io_cmd_raw_no_payload_build ...passed 00:15:20.071 Test: test_io_raw_cmd ...passed 00:15:20.071 Test: test_io_raw_cmd_with_md ...passed 00:15:20.071 Test: test_namespace_attach ...passed 00:15:20.071 Test: test_namespace_detach ...passed 00:15:20.071 Test: test_namespace_create ...passed 00:15:20.071 Test: test_namespace_delete ...passed 00:15:20.071 Test: test_doorbell_buffer_config ...passed 00:15:20.072 Test: test_format_nvme ...passed 00:15:20.072 Test: test_fw_commit ...passed 00:15:20.072 Test: test_fw_image_download ...passed 00:15:20.072 Test: test_sanitize ...passed 00:15:20.072 Test: test_directive ...passed 00:15:20.072 Test: test_nvme_request_add_abort ...passed 00:15:20.072 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:15:20.072 Test: test_nvme_ctrlr_cmd_identify ...passed 00:15:20.072 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:15:20.072 00:15:20.072 Run Summary: Type Total Ran Passed Failed Inactive 00:15:20.072 suites 1 1 n/a 0 0 00:15:20.072 tests 24 24 24 0 0 00:15:20.072 asserts 198 198 198 0 n/a 00:15:20.072 00:15:20.072 Elapsed time = 0.000 seconds 00:15:20.072 09:41:47 unittest.unittest_nvme -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:15:20.072 00:15:20.072 00:15:20.072 CUnit - A unit testing framework for C - Version 2.1-3 00:15:20.072 http://cunit.sourceforge.net/ 00:15:20.072 00:15:20.072 00:15:20.072 Suite: nvme_ctrlr_cmd 00:15:20.072 Test: test_geometry_cmd ...passed 00:15:20.072 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:15:20.072 00:15:20.072 Run Summary: Type Total Ran Passed Failed Inactive 00:15:20.072 suites 1 1 n/a 0 0 00:15:20.072 tests 2 2 2 0 0 00:15:20.072 asserts 7 7 7 0 n/a 00:15:20.072 00:15:20.072 Elapsed time = 0.000 seconds 00:15:20.072 09:41:47 unittest.unittest_nvme -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:15:20.072 00:15:20.072 00:15:20.072 CUnit - A unit testing framework for C - Version 2.1-3 00:15:20.072 http://cunit.sourceforge.net/ 00:15:20.072 00:15:20.072 00:15:20.072 Suite: nvme 00:15:20.072 Test: test_nvme_ns_construct ...passed 00:15:20.072 Test: test_nvme_ns_uuid ...passed 00:15:20.072 Test: test_nvme_ns_csi ...passed 00:15:20.072 Test: test_nvme_ns_data ...passed 00:15:20.072 Test: test_nvme_ns_set_identify_data ...passed 00:15:20.072 Test: test_spdk_nvme_ns_get_values ...passed 00:15:20.072 Test: test_spdk_nvme_ns_is_active ...passed 00:15:20.072 Test: spdk_nvme_ns_supports ...passed 00:15:20.072 Test: 
test_nvme_ns_has_supported_iocs_specific_data ...passed 00:15:20.072 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:15:20.072 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:15:20.072 Test: test_nvme_ns_find_id_desc ...passed 00:15:20.072 00:15:20.072 Run Summary: Type Total Ran Passed Failed Inactive 00:15:20.072 suites 1 1 n/a 0 0 00:15:20.072 tests 12 12 12 0 0 00:15:20.072 asserts 95 95 95 0 n/a 00:15:20.072 00:15:20.072 Elapsed time = 0.000 seconds 00:15:20.072 09:41:47 unittest.unittest_nvme -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:15:20.072 00:15:20.072 00:15:20.072 CUnit - A unit testing framework for C - Version 2.1-3 00:15:20.072 http://cunit.sourceforge.net/ 00:15:20.072 00:15:20.072 00:15:20.072 Suite: nvme_ns_cmd 00:15:20.072 Test: split_test ...passed 00:15:20.072 Test: split_test2 ...passed 00:15:20.072 Test: split_test3 ...passed 00:15:20.072 Test: split_test4 ...passed 00:15:20.072 Test: test_nvme_ns_cmd_flush ...passed 00:15:20.072 Test: test_nvme_ns_cmd_dataset_management ...passed 00:15:20.072 Test: test_nvme_ns_cmd_copy ...passed 00:15:20.072 Test: test_io_flags ...[2024-07-15 09:41:47.941347] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:15:20.072 passed 00:15:20.072 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:15:20.072 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:15:20.072 Test: test_nvme_ns_cmd_reservation_register ...passed 00:15:20.072 Test: test_nvme_ns_cmd_reservation_release ...passed 00:15:20.072 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:15:20.072 Test: test_nvme_ns_cmd_reservation_report ...passed 00:15:20.072 Test: test_cmd_child_request ...passed 00:15:20.072 Test: test_nvme_ns_cmd_readv ...passed 00:15:20.072 Test: test_nvme_ns_cmd_read_with_md ...passed 00:15:20.072 Test: test_nvme_ns_cmd_writev ...passed 00:15:20.072 Test: test_nvme_ns_cmd_write_with_md ...passed 00:15:20.072 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:15:20.072 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:15:20.072 Test: test_nvme_ns_cmd_comparev ...passed 00:15:20.072 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:15:20.072 Test: test_nvme_ns_cmd_compare_with_md ...[2024-07-15 09:41:47.941670] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 292:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:15:20.072 passed 00:15:20.072 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:15:20.072 Test: test_nvme_ns_cmd_setup_request ...passed 00:15:20.072 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:15:20.072 Test: test_spdk_nvme_ns_cmd_writev_ext ...passed 00:15:20.072 Test: test_spdk_nvme_ns_cmd_readv_ext ...passed 00:15:20.072 Test: test_nvme_ns_cmd_verify ...passed 00:15:20.072 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:15:20.072 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed[2024-07-15 09:41:47.941778] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:15:20.072 [2024-07-15 09:41:47.941797] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:15:20.072 00:15:20.072 00:15:20.072 Run Summary: Type Total Ran Passed Failed Inactive 00:15:20.072 suites 1 1 n/a 0 0 00:15:20.072 tests 32 32 32 0 0 00:15:20.072 asserts 550 550 550 0 n/a 00:15:20.072 00:15:20.072 Elapsed time = 0.000 seconds 
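Each binary invoked above (nvme_ctrlr_ut, nvme_ns_cmd_ut, and the rest) is a standalone CUnit program; the repeated "CUnit - A unit testing framework" headers and "Run Summary" tables in this log are the output of the CUnit Basic interface. A minimal sketch of how such a binary is wired up, using only the standard CUnit Basic API — the placeholder test body below is an assumption for illustration, not the actual contents of nvme_ns_cmd_ut.c:

#include <stdint.h>
#include <CUnit/Basic.h>

/* Placeholder test body; a real SPDK unit test would call into lib/nvme
 * and assert on the return codes that produce the "*ERROR*" records seen
 * above (e.g. _is_io_flags_valid rejecting io_flags 0xfffc). */
static void
test_io_flags(void)
{
	uint32_t invalid_flags = 0xfffc;   /* value taken from the log above */

	CU_ASSERT(invalid_flags != 0);     /* stand-in assertion */
}

int
main(void)
{
	CU_pSuite suite;
	unsigned int num_failures;

	if (CU_initialize_registry() != CUE_SUCCESS) {
		return CU_get_error();
	}

	/* Suite and test names appear verbatim in the "Suite:" and "Test:"
	 * lines of the console output captured in this log. */
	suite = CU_add_suite("nvme_ns_cmd", NULL, NULL);
	CU_add_test(suite, "test_io_flags", test_io_flags);

	CU_basic_set_mode(CU_BRM_VERBOSE);
	CU_basic_run_tests();
	num_failures = CU_get_number_of_failures();
	CU_cleanup_registry();

	return (int)num_failures;
}

Running such a binary prints the same per-test "...passed" markers and the "Run Summary: Type Total Ran Passed Failed Inactive" table seen throughout this log; the test script's exit status is the failure count.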
00:15:20.072 09:41:47 unittest.unittest_nvme -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:15:20.072 00:15:20.072 00:15:20.072 CUnit - A unit testing framework for C - Version 2.1-3 00:15:20.072 http://cunit.sourceforge.net/ 00:15:20.072 00:15:20.072 00:15:20.072 Suite: nvme_ns_cmd 00:15:20.072 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 00:15:20.072 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:15:20.072 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:15:20.072 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:15:20.072 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:15:20.072 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:15:20.072 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:15:20.072 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:15:20.072 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:15:20.072 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:15:20.072 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:15:20.072 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:15:20.072 00:15:20.072 Run Summary: Type Total Ran Passed Failed Inactive 00:15:20.072 suites 1 1 n/a 0 0 00:15:20.072 tests 12 12 12 0 0 00:15:20.072 asserts 123 123 123 0 n/a 00:15:20.072 00:15:20.072 Elapsed time = 0.000 seconds 00:15:20.072 09:41:47 unittest.unittest_nvme -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:15:20.072 00:15:20.072 00:15:20.072 CUnit - A unit testing framework for C - Version 2.1-3 00:15:20.072 http://cunit.sourceforge.net/ 00:15:20.072 00:15:20.072 00:15:20.072 Suite: nvme_qpair 00:15:20.072 Test: test3 ...passed 00:15:20.072 Test: test_ctrlr_failed ...passed 00:15:20.072 Test: struct_packing ...passed 00:15:20.072 Test: test_nvme_qpair_process_completions ...[2024-07-15 09:41:47.955721] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.072 [2024-07-15 09:41:47.956067] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.072 [2024-07-15 09:41:47.956158] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 805:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (Device not configured) on qpair id 0 00:15:20.072 [2024-07-15 09:41:47.956177] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 805:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (Device not configured) on qpair id 1 00:15:20.072 passed 00:15:20.072 Test: test_nvme_completion_is_retry ...passed 00:15:20.072 Test: test_get_status_string ...passed 00:15:20.072 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:15:20.072 Test: test_nvme_qpair_submit_request ...passed 00:15:20.072 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:15:20.072 Test: test_nvme_qpair_manual_complete_request ...passed 00:15:20.072 Test: test_nvme_qpair_init_deinit ...passed 00:15:20.072 Test: test_nvme_get_sgl_print_info ...passed 00:15:20.072 00:15:20.072 Run Summary: Type Total Ran Passed Failed Inactive 00:15:20.072 suites 1 1 n/a 0 0 00:15:20.072 tests 12 12 12 0 0 00:15:20.072 asserts 154 154 154 0 n/a 00:15:20.072 00:15:20.072 Elapsed time = 0.000 seconds 00:15:20.072 [2024-07-15 09:41:47.956248] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.072 09:41:47 unittest.unittest_nvme -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:15:20.072 00:15:20.072 00:15:20.072 CUnit - A unit testing framework for C - Version 2.1-3 00:15:20.072 http://cunit.sourceforge.net/ 00:15:20.072 00:15:20.072 00:15:20.072 Suite: nvme_pcie 00:15:20.072 Test: test_prp_list_append ...passed 00:15:20.072 Test: test_nvme_pcie_hotplug_monitor ...passed 00:15:20.072 Test: test_shadow_doorbell_update ...passed 00:15:20.072 Test: test_build_contig_hw_sgl_request ...passed 00:15:20.072 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:15:20.072 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:15:20.072 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:15:20.072 Test: test_nvme_pcie_qpair_build_contig_request ...passed 00:15:20.072 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:15:20.072 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:15:20.072 Test: test_nvme_pcie_ctrlr_map_io_cmb ...[2024-07-15 09:41:47.963014] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1205:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:15:20.072 [2024-07-15 09:41:47.963262] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1234:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:15:20.072 [2024-07-15 09:41:47.963278] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1224:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:15:20.073 [2024-07-15 09:41:47.963330] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1218:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:15:20.073 [2024-07-15 09:41:47.963356] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1218:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:15:20.073 [2024-07-15 09:41:47.963446] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1205:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:15:20.073 [2024-07-15 09:41:47.963478] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
00:15:20.073 passed 00:15:20.073 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed 00:15:20.073 Test: test_nvme_pcie_ctrlr_config_pmr ...[2024-07-15 09:41:47.963495] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:15:20.073 passed 00:15:20.073 Test: test_nvme_pcie_ctrlr_map_io_pmr ...passed 00:15:20.073 00:15:20.073 Run Summary: Type Total Ran Passed Failed Inactive 00:15:20.073 suites 1 1 n/a 0 0 00:15:20.073 tests 14 14 14 0 0 00:15:20.073 asserts 235 235 235 0 n/a 00:15:20.073 00:15:20.073 Elapsed time = 0.000 seconds 00:15:20.073 [2024-07-15 09:41:47.963513] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:15:20.073 [2024-07-15 09:41:47.963527] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:15:20.073 09:41:47 unittest.unittest_nvme -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:15:20.073 00:15:20.073 00:15:20.073 CUnit - A unit testing framework for C - Version 2.1-3 00:15:20.073 http://cunit.sourceforge.net/ 00:15:20.073 00:15:20.073 00:15:20.073 Suite: nvme_ns_cmd 00:15:20.073 Test: nvme_poll_group_create_test ...passed 00:15:20.073 Test: nvme_poll_group_add_remove_test ...passed 00:15:20.073 Test: nvme_poll_group_process_completions ...passed 00:15:20.073 Test: nvme_poll_group_destroy_test ...passed 00:15:20.073 Test: nvme_poll_group_get_free_stats ...passed 00:15:20.073 00:15:20.073 Run Summary: Type Total Ran Passed Failed Inactive 00:15:20.073 suites 1 1 n/a 0 0 00:15:20.073 tests 5 5 5 0 0 00:15:20.073 asserts 75 75 75 0 n/a 00:15:20.073 00:15:20.073 Elapsed time = 0.000 seconds 00:15:20.073 09:41:47 unittest.unittest_nvme -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:15:20.073 00:15:20.073 00:15:20.073 CUnit - A unit testing framework for C - Version 2.1-3 00:15:20.073 http://cunit.sourceforge.net/ 00:15:20.073 00:15:20.073 00:15:20.073 Suite: nvme_quirks 00:15:20.073 Test: test_nvme_quirks_striping ...passed 00:15:20.073 00:15:20.073 Run Summary: Type Total Ran Passed Failed Inactive 00:15:20.073 suites 1 1 n/a 0 0 00:15:20.073 tests 1 1 1 0 0 00:15:20.073 asserts 5 5 5 0 n/a 00:15:20.073 00:15:20.073 Elapsed time = 0.000 seconds 00:15:20.073 09:41:47 unittest.unittest_nvme -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:15:20.073 00:15:20.073 00:15:20.073 CUnit - A unit testing framework for C - Version 2.1-3 00:15:20.073 http://cunit.sourceforge.net/ 00:15:20.073 00:15:20.073 00:15:20.073 Suite: nvme_tcp 00:15:20.073 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:15:20.073 Test: test_nvme_tcp_build_iovs ...passed 00:15:20.073 Test: test_nvme_tcp_build_sgl_request ...[2024-07-15 09:41:47.981663] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 849:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x820f9a468, and the iovcnt=16, remaining_size=28672 00:15:20.073 passed 00:15:20.073 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:15:20.073 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:15:20.073 Test: test_nvme_tcp_req_complete_safe ...passed 00:15:20.073 Test: test_nvme_tcp_req_get ...passed 00:15:20.073 Test: test_nvme_tcp_req_init ...passed 00:15:20.073 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:15:20.073 Test: 
test_nvme_tcp_qpair_write_pdu ...passed 00:15:20.073 Test: test_nvme_tcp_qpair_set_recv_state ...[2024-07-15 09:41:47.982015] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820f9c018 is same with the state(6) to be set 00:15:20.073 passed 00:15:20.073 Test: test_nvme_tcp_alloc_reqs ...passed 00:15:20.073 Test: test_nvme_tcp_qpair_send_h2c_term_req ...passed 00:15:20.073 Test: test_nvme_tcp_pdu_ch_handle ...[2024-07-15 09:41:47.982062] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820f9c018 is same with the state(5) to be set 00:15:20.073 [2024-07-15 09:41:47.982083] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1190:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x820f9b7a8 00:15:20.073 [2024-07-15 09:41:47.982096] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1250:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:15:20.073 [2024-07-15 09:41:47.982108] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820f9c018 is same with the state(5) to be set 00:15:20.073 [2024-07-15 09:41:47.982121] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1200:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:15:20.073 [2024-07-15 09:41:47.982132] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820f9c018 is same with the state(5) to be set 00:15:20.073 passed 00:15:20.073 Test: test_nvme_tcp_qpair_connect_sock ...[2024-07-15 09:41:47.982144] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:15:20.073 [2024-07-15 09:41:47.982156] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820f9c018 is same with the state(5) to be set 00:15:20.073 [2024-07-15 09:41:47.982175] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820f9c018 is same with the state(5) to be set 00:15:20.073 [2024-07-15 09:41:47.982188] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820f9c018 is same with the state(5) to be set 00:15:20.073 [2024-07-15 09:41:47.982200] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820f9c018 is same with the state(5) to be set 00:15:20.073 [2024-07-15 09:41:47.982211] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820f9c018 is same with the state(5) to be set 00:15:20.073 [2024-07-15 09:41:47.982223] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820f9c018 is same with the state(5) to be set 00:15:20.073 [2024-07-15 09:41:47.982261] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:15:20.073 [2024-07-15 09:41:47.982273] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2345:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:15:20.073 [2024-07-15 09:41:48.021271] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2345:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:15:20.073 passed 00:15:20.073 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:15:20.073 Test: test_nvme_tcp_c2h_payload_handle ...[2024-07-15 09:41:48.021387] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1358:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x820f9bbe0): PDU Sequence Error 00:15:20.073 passed 00:15:20.073 Test: test_nvme_tcp_icresp_handle ...passed 00:15:20.073 Test: test_nvme_tcp_pdu_payload_handle ...passed 00:15:20.073 Test: test_nvme_tcp_capsule_resp_hdr_handle ...passed 00:15:20.073 Test: test_nvme_tcp_ctrlr_connect_qpair ...[2024-07-15 09:41:48.021424] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1576:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:15:20.073 [2024-07-15 09:41:48.021442] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1584:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:15:20.073 [2024-07-15 09:41:48.021455] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820f9c018 is same with the state(5) to be set 00:15:20.073 [2024-07-15 09:41:48.021470] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1592:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:15:20.073 [2024-07-15 09:41:48.021482] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820f9c018 is same with the state(5) to be set 00:15:20.073 [2024-07-15 09:41:48.021497] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820f9c018 is same with the state(0) to be set 00:15:20.073 [2024-07-15 09:41:48.021515] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1358:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x820f9bbe0): PDU Sequence Error 00:15:20.073 [2024-07-15 09:41:48.021545] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1653:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x820f9c018 00:15:20.073 passed 00:15:20.073 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-07-15 09:41:48.021606] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 358:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x820f99d78, errno=0, rc=0 00:15:20.073 [2024-07-15 09:41:48.021624] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820f99d78 is same with the state(5) to be set 00:15:20.073 [2024-07-15 09:41:48.021637] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820f99d78 is same with the state(5) to be set 00:15:20.073 passed 00:15:20.073 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-07-15 09:41:48.021742] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2186:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x820f99d78 (0): No error: 0 00:15:20.073 [2024-07-15 09:41:48.021757] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2186:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x820f99d78 (0): No error: 0 00:15:20.074 [2024-07-15 09:41:48.104790] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2517:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 
00:15:20.074 [2024-07-15 09:41:48.104899] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2517:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:15:20.074 passed 00:15:20.074 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:15:20.074 Test: test_nvme_tcp_poll_group_get_stats ...passed 00:15:20.074 Test: test_nvme_tcp_ctrlr_construct ...[2024-07-15 09:41:48.104945] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2964:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:15:20.074 [2024-07-15 09:41:48.104952] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2964:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:15:20.074 [2024-07-15 09:41:48.104997] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2517:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:15:20.074 passed 00:15:20.074 Test: test_nvme_tcp_qpair_submit_request ...passed 00:15:20.074 00:15:20.074 Run Summary: Type Total Ran Passed Failed Inactive 00:15:20.074 suites 1 1 n/a 0 0 00:15:20.074 tests 27 27 27 0 0 00:15:20.074 asserts 624 624 624 0 n/a 00:15:20.074 00:15:20.074 Elapsed time = 0.078 seconds 00:15:20.074 [2024-07-15 09:41:48.105004] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:20.074 [2024-07-15 09:41:48.105015] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:15:20.074 [2024-07-15 09:41:48.105022] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:20.074 [2024-07-15 09:41:48.105034] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2384:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc735c6b000 with addr=192.168.1.78, port=23 00:15:20.074 [2024-07-15 09:41:48.105040] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:20.074 [2024-07-15 09:41:48.105056] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 849:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0xdc735c39180, and the iovcnt=1, remaining_size=1024 00:15:20.074 [2024-07-15 09:41:48.105062] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1035:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:15:20.074 09:41:48 unittest.unittest_nvme -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:15:20.074 00:15:20.074 00:15:20.074 CUnit - A unit testing framework for C - Version 2.1-3 00:15:20.074 http://cunit.sourceforge.net/ 00:15:20.074 00:15:20.074 00:15:20.074 Suite: nvme_transport 00:15:20.074 Test: test_nvme_get_transport ...passed 00:15:20.074 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:15:20.074 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:15:20.074 Test: test_nvme_transport_poll_group_add_remove ...passed 00:15:20.074 Test: test_ctrlr_get_memory_domains ...passed 00:15:20.074 00:15:20.074 Run Summary: Type Total Ran Passed Failed Inactive 00:15:20.074 suites 1 1 n/a 0 0 00:15:20.074 tests 5 5 5 0 0 00:15:20.074 asserts 28 28 28 0 n/a 00:15:20.074 00:15:20.074 Elapsed time = 0.000 seconds 00:15:20.074 09:41:48 unittest.unittest_nvme -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:15:20.074 00:15:20.074 
00:15:20.074 CUnit - A unit testing framework for C - Version 2.1-3 00:15:20.074 http://cunit.sourceforge.net/ 00:15:20.074 00:15:20.074 00:15:20.074 Suite: nvme_io_msg 00:15:20.074 Test: test_nvme_io_msg_send ...passed 00:15:20.074 Test: test_nvme_io_msg_process ...passed 00:15:20.074 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:15:20.074 00:15:20.074 Run Summary: Type Total Ran Passed Failed Inactive 00:15:20.074 suites 1 1 n/a 0 0 00:15:20.074 tests 3 3 3 0 0 00:15:20.074 asserts 56 56 56 0 n/a 00:15:20.074 00:15:20.074 Elapsed time = 0.000 seconds 00:15:20.074 09:41:48 unittest.unittest_nvme -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:15:20.074 00:15:20.074 00:15:20.074 CUnit - A unit testing framework for C - Version 2.1-3 00:15:20.074 http://cunit.sourceforge.net/ 00:15:20.074 00:15:20.074 00:15:20.074 Suite: nvme_pcie_common 00:15:20.074 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-07-15 09:41:48.127206] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:15:20.074 passed 00:15:20.074 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:15:20.074 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:15:20.074 Test: test_nvme_pcie_ctrlr_connect_qpair ...passed 00:15:20.074 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...[2024-07-15 09:41:48.127515] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 504:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:15:20.074 [2024-07-15 09:41:48.127537] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 457:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 
00:15:20.074 [2024-07-15 09:41:48.127550] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 551:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:15:20.074 passed 00:15:20.074 Test: test_nvme_pcie_poll_group_get_stats ...passed 00:15:20.074 00:15:20.074 Run Summary: Type Total Ran Passed Failed Inactive 00:15:20.074 suites 1 1 n/a 0 0 00:15:20.074 tests 6 6 6 0 0 00:15:20.074 asserts 148 148 148 0 n/a 00:15:20.074 00:15:20.074 Elapsed time = 0.000 seconds[2024-07-15 09:41:48.127683] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1797:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:15:20.074 [2024-07-15 09:41:48.127696] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1797:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:15:20.074 00:15:20.074 09:41:48 unittest.unittest_nvme -- unit/unittest.sh@103 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:15:20.074 00:15:20.074 00:15:20.074 CUnit - A unit testing framework for C - Version 2.1-3 00:15:20.074 http://cunit.sourceforge.net/ 00:15:20.074 00:15:20.074 00:15:20.074 Suite: nvme_fabric 00:15:20.074 Test: test_nvme_fabric_prop_set_cmd ...passed 00:15:20.074 Test: test_nvme_fabric_prop_get_cmd ...passed 00:15:20.074 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:15:20.074 Test: test_nvme_fabric_discover_probe ...passed 00:15:20.074 Test: test_nvme_fabric_qpair_connect ...passed 00:15:20.074 00:15:20.074 [2024-07-15 09:41:48.134206] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 607:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -85, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:15:20.074 Run Summary: Type Total Ran Passed Failed Inactive 00:15:20.074 suites 1 1 n/a 0 0 00:15:20.074 tests 5 5 5 0 0 00:15:20.074 asserts 60 60 60 0 n/a 00:15:20.074 00:15:20.074 Elapsed time = 0.000 seconds 00:15:20.074 09:41:48 unittest.unittest_nvme -- unit/unittest.sh@104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:15:20.074 00:15:20.074 00:15:20.074 CUnit - A unit testing framework for C - Version 2.1-3 00:15:20.074 http://cunit.sourceforge.net/ 00:15:20.074 00:15:20.074 00:15:20.074 Suite: nvme_opal 00:15:20.074 Test: test_opal_nvme_security_recv_send_done ...passed 00:15:20.074 Test: test_opal_add_short_atom_header ...passed 00:15:20.074 00:15:20.074 Run Summary: Type Total Ran Passed Failed Inactive 00:15:20.074 suites 1 1 n/a 0 0 00:15:20.074 tests 2 2 2 0 0 00:15:20.074 asserts 22 22 22 0 n/a 00:15:20.074 00:15:20.074 Elapsed time = 0.000 seconds 00:15:20.074 [2024-07-15 09:41:48.140431] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 
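The bracketed records of the form "[date time] path/file.c: line:function: *ERROR*: message" scattered through this run are SPDK_ERRLOG output from the library code under test — the tests deliberately drive error paths, so these records appear even though every test passes. A hedged sketch of the pattern: SPDK_ERRLOG is the real macro from include/spdk/log.h, while the wrapper function and main below are illustrative assumptions, not SPDK source:

#include "spdk/stdinc.h"
#include "spdk/log.h"

/* Illustrative helper showing the shape behind records such as
 * "nvme_tcp.c:2517:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to
 * create qpair with size 0. Minimum queue size is 2." */
static int
check_qpair_size(uint32_t size)
{
	if (size < 2) {
		/* SPDK_ERRLOG stamps __FILE__, __LINE__ and __func__ onto
		 * the message, which is exactly the format of the error
		 * records captured in this log. */
		SPDK_ERRLOG("Failed to create qpair with size %u. "
			    "Minimum queue size is 2.\n", size);
		return -EINVAL;
	}

	return 0;
}

int
main(void)
{
	/* Exercising the error path emits one *ERROR* record, as the
	 * unit tests above do on purpose. Requires SPDK headers and
	 * linking against the spdk_log library. */
	return check_qpair_size(0) == -EINVAL ? 0 : 1;
}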
00:15:20.074 00:15:20.074 real 0m0.463s 00:15:20.074 user 0m0.106s 00:15:20.074 sys 0m0.168s 00:15:20.074 09:41:48 unittest.unittest_nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:20.074 09:41:48 unittest.unittest_nvme -- common/autotest_common.sh@10 -- # set +x 00:15:20.074 ************************************ 00:15:20.074 END TEST unittest_nvme 00:15:20.074 ************************************ 00:15:20.335 09:41:48 unittest -- common/autotest_common.sh@1142 -- # return 0 00:15:20.335 09:41:48 unittest -- unit/unittest.sh@249 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:15:20.335 09:41:48 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:20.335 09:41:48 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:20.335 09:41:48 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:20.336 ************************************ 00:15:20.336 START TEST unittest_log 00:15:20.336 ************************************ 00:15:20.336 09:41:48 unittest.unittest_log -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:15:20.336 00:15:20.336 00:15:20.336 CUnit - A unit testing framework for C - Version 2.1-3 00:15:20.336 http://cunit.sourceforge.net/ 00:15:20.336 00:15:20.336 00:15:20.336 Suite: log 00:15:20.336 Test: log_test ...[2024-07-15 09:41:48.189976] log_ut.c: 56:log_test: *WARNING*: log warning unit test 00:15:20.336 passed 00:15:20.336 Test: deprecation ...[2024-07-15 09:41:48.190262] log_ut.c: 57:log_test: *DEBUG*: log test 00:15:20.336 log dump test: 00:15:20.336 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:15:20.336 spdk dump test: 00:15:20.336 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:15:20.336 spdk dump test: 00:15:20.336 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:15:20.336 00000010 65 20 63 68 61 72 73 e chars 00:15:21.270 passed 00:15:21.270 00:15:21.270 Run Summary: Type Total Ran Passed Failed Inactive 00:15:21.270 suites 1 1 n/a 0 0 00:15:21.270 tests 2 2 2 0 0 00:15:21.270 asserts 73 73 73 0 n/a 00:15:21.270 00:15:21.270 Elapsed time = 0.000 seconds 00:15:21.270 00:15:21.270 real 0m1.016s 00:15:21.270 user 0m0.003s 00:15:21.270 sys 0m0.005s 00:15:21.270 09:41:49 unittest.unittest_log -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:21.270 09:41:49 unittest.unittest_log -- common/autotest_common.sh@10 -- # set +x 00:15:21.270 ************************************ 00:15:21.270 END TEST unittest_log 00:15:21.270 ************************************ 00:15:21.270 09:41:49 unittest -- common/autotest_common.sh@1142 -- # return 0 00:15:21.270 09:41:49 unittest -- unit/unittest.sh@250 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:15:21.270 09:41:49 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:21.270 09:41:49 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:21.270 09:41:49 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:21.270 ************************************ 00:15:21.270 START TEST unittest_lvol 00:15:21.270 ************************************ 00:15:21.270 09:41:49 unittest.unittest_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:15:21.270 00:15:21.270 00:15:21.270 CUnit - A unit testing framework for C - Version 2.1-3 00:15:21.270 http://cunit.sourceforge.net/ 00:15:21.270 00:15:21.270 00:15:21.270 Suite: lvol 00:15:21.270 Test: 
lvs_init_unload_success ...passed 00:15:21.270 Test: lvs_init_destroy_success ...[2024-07-15 09:41:49.254803] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:15:21.270 [2024-07-15 09:41:49.255164] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:15:21.270 passed 00:15:21.270 Test: lvs_init_opts_success ...passed 00:15:21.270 Test: lvs_unload_lvs_is_null_fail ...passed 00:15:21.270 Test: lvs_names ...[2024-07-15 09:41:49.255218] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:15:21.270 [2024-07-15 09:41:49.255238] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:15:21.270 [2024-07-15 09:41:49.255279] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:15:21.270 passed 00:15:21.270 Test: lvol_create_destroy_success ...passed 00:15:21.270 Test: lvol_create_fail ...passed 00:15:21.270 Test: lvol_destroy_fail ...passed 00:15:21.270 Test: lvol_close ...passed 00:15:21.270 Test: lvol_resize ...passed 00:15:21.270 Test: lvol_set_read_only ...passed[2024-07-15 09:41:49.255298] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:15:21.270 [2024-07-15 09:41:49.255358] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:15:21.270 [2024-07-15 09:41:49.255379] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:15:21.270 [2024-07-15 09:41:49.255416] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:15:21.270 [2024-07-15 09:41:49.255447] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:15:21.270 [2024-07-15 09:41:49.255463] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:15:21.270 00:15:21.270 Test: test_lvs_load ...passed 00:15:21.270 Test: lvols_load ...passed 00:15:21.270 Test: lvol_open ...[2024-07-15 09:41:49.255563] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:15:21.270 [2024-07-15 09:41:49.255579] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:15:21.270 [2024-07-15 09:41:49.255616] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:15:21.271 [2024-07-15 09:41:49.255656] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:15:21.271 passed 00:15:21.271 Test: lvol_snapshot ...passed 00:15:21.271 Test: lvol_snapshot_fail ...passed 00:15:21.271 Test: lvol_clone ...passed 00:15:21.271 Test: lvol_clone_fail ...passed 00:15:21.271 Test: lvol_iter_clones ...passed 00:15:21.271 Test: lvol_refcnt ...passed[2024-07-15 09:41:49.255823] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:15:21.271 [2024-07-15 09:41:49.255905] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:15:21.271 [2024-07-15 09:41:49.255966] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 
754d3b04-428e-11ef-a0af-c98d8ee52a94 because it is still open 00:15:21.271 00:15:21.271 Test: lvol_names ...passed 00:15:21.271 Test: lvol_create_thin_provisioned ...passed[2024-07-15 09:41:49.255994] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:15:21.271 [2024-07-15 09:41:49.256017] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:15:21.271 [2024-07-15 09:41:49.256041] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:15:21.271 00:15:21.271 Test: lvol_rename ...[2024-07-15 09:41:49.256265] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:15:21.271 [2024-07-15 09:41:49.256295] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:15:21.271 passed 00:15:21.271 Test: lvs_rename ...passed 00:15:21.271 Test: lvol_inflate ...passed 00:15:21.271 Test: lvol_decouple_parent ...[2024-07-15 09:41:49.256341] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:15:21.271 [2024-07-15 09:41:49.256377] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:15:21.271 passed 00:15:21.271 Test: lvol_get_xattr ...passed 00:15:21.271 Test: lvol_esnap_reload ...passed 00:15:21.271 Test: lvol_esnap_create_bad_args ...[2024-07-15 09:41:49.256411] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:15:21.271 [2024-07-15 09:41:49.256470] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:15:21.271 [2024-07-15 09:41:49.256489] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:15:21.271 [2024-07-15 09:41:49.256505] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1260:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:15:21.271 [2024-07-15 09:41:49.256524] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:15:21.271 passed 00:15:21.271 Test: lvol_esnap_create_delete ...passed 00:15:21.271 Test: lvol_esnap_load_esnaps ...passed 00:15:21.271 Test: lvol_esnap_missing ...passed 00:15:21.271 Test: lvol_esnap_hotplug ... 
00:15:21.271 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:15:21.271 [2024-07-15 09:41:49.256557] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:15:21.271 [2024-07-15 09:41:49.256598] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1833:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:15:21.271 [2024-07-15 09:41:49.256627] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:15:21.271 [2024-07-15 09:41:49.256641] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:15:21.271 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:15:21.271 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:15:21.271 [2024-07-15 09:41:49.256732] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol 754d58cc-428e-11ef-a0af-c98d8ee52a94: failed to create esnap bs_dev: error -12 00:15:21.271 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:15:21.271 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:15:21.271 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:15:21.271 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:15:21.271 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:15:21.271 [2024-07-15 09:41:49.257108] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol 754d6752-428e-11ef-a0af-c98d8ee52a94: failed to create esnap bs_dev: error -12 00:15:21.271 [2024-07-15 09:41:49.257150] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol 754d6935-428e-11ef-a0af-c98d8ee52a94: failed to create esnap bs_dev: error -12 00:15:21.271 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:15:21.271 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:15:21.271 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:15:21.271 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:15:21.271 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:15:21.271 passed 00:15:21.271 Test: lvol_get_by ...passed 00:15:21.271 Test: lvol_shallow_copy ...passed 00:15:21.271 Test: lvol_set_parent ...[2024-07-15 09:41:49.257428] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2274:spdk_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:15:21.271 [2024-07-15 09:41:49.257450] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2281:spdk_lvol_shallow_copy: *ERROR*: lvol 754d7410-428e-11ef-a0af-c98d8ee52a94 shallow copy, ext_dev must not be NULL 00:15:21.271 passed 00:15:21.271 Test: lvol_set_external_parent ...passed 00:15:21.271 00:15:21.271 Run Summary: Type Total Ran Passed Failed Inactive 00:15:21.271 suites 1 1 n/a 0 0 00:15:21.271 tests 37 37 37 0 0 00:15:21.271 asserts 1505 1505 1505 0 n/a 00:15:21.271 00:15:21.271 Elapsed time = 0.008 seconds 00:15:21.271 [2024-07-15 09:41:49.257490] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2338:spdk_lvol_set_parent: *ERROR*: lvol must not be NULL 00:15:21.271 [2024-07-15 09:41:49.257506] 
/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2344:spdk_lvol_set_parent: *ERROR*: snapshot must not be NULL 00:15:21.271 [2024-07-15 09:41:49.257533] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2393:spdk_lvol_set_external_parent: *ERROR*: lvol must not be NULL 00:15:21.271 [2024-07-15 09:41:49.257545] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2399:spdk_lvol_set_external_parent: *ERROR*: snapshot must not be NULL 00:15:21.271 [2024-07-15 09:41:49.257557] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2406:spdk_lvol_set_external_parent: *ERROR*: lvol lvol and esnap have the same UUID 00:15:21.271 00:15:21.271 real 0m0.013s 00:15:21.271 user 0m0.009s 00:15:21.271 sys 0m0.008s 00:15:21.271 09:41:49 unittest.unittest_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:21.271 09:41:49 unittest.unittest_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:21.271 ************************************ 00:15:21.271 END TEST unittest_lvol 00:15:21.271 ************************************ 00:15:21.271 09:41:49 unittest -- common/autotest_common.sh@1142 -- # return 0 00:15:21.271 09:41:49 unittest -- unit/unittest.sh@251 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:15:21.271 09:41:49 unittest -- unit/unittest.sh@252 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:15:21.271 09:41:49 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:21.271 09:41:49 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:21.271 09:41:49 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:21.271 ************************************ 00:15:21.271 START TEST unittest_nvme_rdma 00:15:21.271 ************************************ 00:15:21.271 09:41:49 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:15:21.271 00:15:21.271 00:15:21.271 CUnit - A unit testing framework for C - Version 2.1-3 00:15:21.271 http://cunit.sourceforge.net/ 00:15:21.271 00:15:21.271 00:15:21.271 Suite: nvme_rdma 00:15:21.271 Test: test_nvme_rdma_build_sgl_request ...[2024-07-15 09:41:49.312851] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1379:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:15:21.271 passed 00:15:21.271 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:15:21.271 Test: test_nvme_rdma_build_contig_request ...passed 00:15:21.271 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:15:21.271 Test: test_nvme_rdma_create_reqs ...passed 00:15:21.271 Test: test_nvme_rdma_create_rsps ...passed 00:15:21.271 Test: test_nvme_rdma_ctrlr_create_qpair ...passed 00:15:21.271 Test: test_nvme_rdma_poller_create ...passed 00:15:21.271 Test: test_nvme_rdma_qpair_process_cm_event ...[2024-07-15 09:41:49.313078] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1553:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:15:21.271 [2024-07-15 09:41:49.313102] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1609:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:15:21.271 [2024-07-15 09:41:49.313122] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1490:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:15:21.271 [2024-07-15 09:41:49.313150] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 
931:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:15:21.271 [2024-07-15 09:41:49.313185] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 849:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:15:21.271 [2024-07-15 09:41:49.313215] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1747:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:15:21.271 [2024-07-15 09:41:49.313226] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1747:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:15:21.271 [2024-07-15 09:41:49.313253] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 450:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:15:21.271 passed 00:15:21.271 Test: test_nvme_rdma_ctrlr_construct ...passed 00:15:21.271 Test: test_nvme_rdma_req_put_and_get ...passed 00:15:21.271 Test: test_nvme_rdma_req_init ...passed 00:15:21.271 Test: test_nvme_rdma_validate_cm_event ...passed 00:15:21.271 Test: test_nvme_rdma_qpair_init ...passed 00:15:21.271 Test: test_nvme_rdma_qpair_submit_request ...passed 00:15:21.271 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:15:21.271 Test: test_rdma_get_memory_translation ...passed 00:15:21.271 Test: test_get_rdma_qpair_from_wc ...passed 00:15:21.271 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:15:21.271 Test: test_nvme_rdma_poll_group_get_stats ...passed 00:15:21.271 Test: test_nvme_rdma_qpair_set_poller ...[2024-07-15 09:41:49.313324] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 544:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:15:21.271 [2024-07-15 09:41:49.313337] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 544:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:15:21.271 [2024-07-15 09:41:49.313368] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1368:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:15:21.271 [2024-07-15 09:41:49.313378] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1379:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:15:21.271 [2024-07-15 09:41:49.313403] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3204:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:15:21.271 [2024-07-15 09:41:49.313412] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3204:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:15:21.271 [2024-07-15 09:41:49.313437] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2916:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 0. 00:15:21.272 [2024-07-15 09:41:49.313447] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2962:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:15:21.272 [2024-07-15 09:41:49.313457] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 647:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x820d327a8 on poll group 0x2ac469872000 00:15:21.272 [2024-07-15 09:41:49.313467] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2916:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 0. 
00:15:21.272 [2024-07-15 09:41:49.313477] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2962:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0x0 00:15:21.272 [2024-07-15 09:41:49.313486] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 647:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x820d327a8 on poll group 0x2ac469872000 00:15:21.272 passed 00:15:21.272 00:15:21.272 Run Summary: Type Total Ran Passed Failed Inactive 00:15:21.272 suites 1 1 n/a 0 0 00:15:21.272 tests 21 21 21 0 0 00:15:21.272 asserts 397 397 397 0 n/a 00:15:21.272 00:15:21.272 Elapsed time = 0.000 seconds 00:15:21.272 [2024-07-15 09:41:49.313549] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 625:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 0: No error: 0 00:15:21.272 00:15:21.272 real 0m0.007s 00:15:21.272 user 0m0.006s 00:15:21.272 sys 0m0.008s 00:15:21.272 09:41:49 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:21.272 09:41:49 unittest.unittest_nvme_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:21.272 ************************************ 00:15:21.272 END TEST unittest_nvme_rdma 00:15:21.272 ************************************ 00:15:21.272 09:41:49 unittest -- common/autotest_common.sh@1142 -- # return 0 00:15:21.272 09:41:49 unittest -- unit/unittest.sh@253 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:15:21.272 09:41:49 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:21.272 09:41:49 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:21.272 09:41:49 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:21.272 ************************************ 00:15:21.272 START TEST unittest_nvmf_transport 00:15:21.272 ************************************ 00:15:21.272 09:41:49 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:15:21.533 00:15:21.533 00:15:21.533 CUnit - A unit testing framework for C - Version 2.1-3 00:15:21.533 http://cunit.sourceforge.net/ 00:15:21.533 00:15:21.533 00:15:21.533 Suite: nvmf 00:15:21.533 Test: test_spdk_nvmf_transport_create ...[2024-07-15 09:41:49.370108] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 251:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 
00:15:21.533 [2024-07-15 09:41:49.370471] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:15:21.533 [2024-07-15 09:41:49.370507] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 276:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:15:21.533 [2024-07-15 09:41:49.370580] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 259:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:15:21.533 passed 00:15:21.533 Test: test_nvmf_transport_poll_group_create ...passed 00:15:21.533 Test: test_spdk_nvmf_transport_opts_init ...passed 00:15:21.533 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:15:21.533 00:15:21.533 Run Summary: Type Total Ran Passed Failed Inactive 00:15:21.533 suites 1 1 n/a 0 0 00:15:21.533 tests 4 4 4 0 0 00:15:21.533 asserts 49 49 49 0 n/a 00:15:21.533 00:15:21.533 Elapsed time = 0.000 seconds 00:15:21.533 [2024-07-15 09:41:49.370645] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 792:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 00:15:21.533 [2024-07-15 09:41:49.370675] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 797:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:15:21.533 [2024-07-15 09:41:49.370705] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 802:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:15:21.533 00:15:21.533 real 0m0.013s 00:15:21.533 user 0m0.001s 00:15:21.533 sys 0m0.016s 00:15:21.533 09:41:49 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:21.533 ************************************ 00:15:21.533 END TEST unittest_nvmf_transport 00:15:21.533 ************************************ 00:15:21.533 09:41:49 unittest.unittest_nvmf_transport -- common/autotest_common.sh@10 -- # set +x 00:15:21.533 09:41:49 unittest -- common/autotest_common.sh@1142 -- # return 0 00:15:21.533 09:41:49 unittest -- unit/unittest.sh@254 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:15:21.533 09:41:49 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:21.533 09:41:49 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:21.533 09:41:49 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:21.533 ************************************ 00:15:21.533 START TEST unittest_rdma 00:15:21.533 ************************************ 00:15:21.533 09:41:49 unittest.unittest_rdma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:15:21.533 00:15:21.533 00:15:21.533 CUnit - A unit testing framework for C - Version 2.1-3 00:15:21.533 http://cunit.sourceforge.net/ 00:15:21.533 00:15:21.533 00:15:21.533 Suite: rdma_common 00:15:21.533 Test: test_spdk_rdma_pd ...[2024-07-15 09:41:49.421725] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 398:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD 00:15:21.533 [2024-07-15 09:41:49.422259] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 398:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD 00:15:21.533 passed 00:15:21.533 00:15:21.533 Run Summary: Type Total Ran Passed Failed Inactive 00:15:21.533 suites 1 1 n/a 0 0 00:15:21.533 tests 1 1 1 0 0 00:15:21.533 asserts 31 31 31 0 n/a 00:15:21.533 00:15:21.533 Elapsed time = 0.000 seconds 00:15:21.533 00:15:21.533 real 0m0.006s 
00:15:21.533 user 0m0.000s 00:15:21.533 sys 0m0.008s 00:15:21.533 09:41:49 unittest.unittest_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:21.533 09:41:49 unittest.unittest_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:21.533 ************************************ 00:15:21.533 END TEST unittest_rdma 00:15:21.533 ************************************ 00:15:21.533 09:41:49 unittest -- common/autotest_common.sh@1142 -- # return 0 00:15:21.533 09:41:49 unittest -- unit/unittest.sh@257 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:15:21.533 09:41:49 unittest -- unit/unittest.sh@261 -- # run_test unittest_nvmf unittest_nvmf 00:15:21.533 09:41:49 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:21.533 09:41:49 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:21.533 09:41:49 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:21.533 ************************************ 00:15:21.533 START TEST unittest_nvmf 00:15:21.533 ************************************ 00:15:21.534 09:41:49 unittest.unittest_nvmf -- common/autotest_common.sh@1123 -- # unittest_nvmf 00:15:21.534 09:41:49 unittest.unittest_nvmf -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:15:21.534 00:15:21.534 00:15:21.534 CUnit - A unit testing framework for C - Version 2.1-3 00:15:21.534 http://cunit.sourceforge.net/ 00:15:21.534 00:15:21.534 00:15:21.534 Suite: nvmf 00:15:21.534 Test: test_get_log_page ...[2024-07-15 09:41:49.473427] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:15:21.534 passed 00:15:21.534 Test: test_process_fabrics_cmd ...passed 00:15:21.534 Test: test_connect ...[2024-07-15 09:41:49.473771] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4731:nvmf_check_qpair_active: *ERROR*: Received command 0x0 on qid 0 before CONNECT 00:15:21.534 [2024-07-15 09:41:49.474134] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1012:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:15:21.534 [2024-07-15 09:41:49.474148] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 875:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:15:21.534 [2024-07-15 09:41:49.474157] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1051:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:15:21.534 [2024-07-15 09:41:49.474295] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 00:15:21.534 [2024-07-15 09:41:49.474313] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 886:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:15:21.534 [2024-07-15 09:41:49.474454] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 894:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:15:21.534 [2024-07-15 09:41:49.474606] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 900:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:15:21.534 [2024-07-15 09:41:49.474762] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 926:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 
00:15:21.534 [2024-07-15 09:41:49.474787] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:15:21.534 [2024-07-15 09:41:49.474814] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 676:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:15:21.534 [2024-07-15 09:41:49.474841] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 682:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:15:21.534 [2024-07-15 09:41:49.474862] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 689:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:15:21.534 [2024-07-15 09:41:49.474883] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 696:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:15:21.534 [2024-07-15 09:41:49.474904] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 720:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:15:21.534 [2024-07-15 09:41:49.474929] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 295:nvmf_ctrlr_add_qpair: *ERROR*: Got I/O connect with duplicate QID 1 (cntlid:0) 00:15:21.534 [2024-07-15 09:41:49.474959] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 806:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 4, group 0x0) 00:15:21.534 [2024-07-15 09:41:49.474980] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 806:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 0, group 0x0) 00:15:21.534 passed 00:15:21.534 Test: test_get_ns_id_desc_list ...passed 00:15:21.534 Test: test_identify_ns ...[2024-07-15 09:41:49.475038] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:21.534 [2024-07-15 09:41:49.475112] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:15:21.534 [2024-07-15 09:41:49.475192] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:21.534 passed 00:15:21.534 Test: test_identify_ns_iocs_specific ...[2024-07-15 09:41:49.475237] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:21.534 [2024-07-15 09:41:49.475309] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:21.534 passed 00:15:21.534 Test: test_reservation_write_exclusive ...passed 00:15:21.534 Test: test_reservation_exclusive_access ...passed 00:15:21.534 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:15:21.534 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:15:21.534 Test: test_reservation_notification_log_page ...passed 00:15:21.534 Test: test_get_dif_ctx ...passed 00:15:21.534 Test: test_set_get_features ...[2024-07-15 09:41:49.475470] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:15:21.534 [2024-07-15 09:41:49.475492] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:15:21.534 [2024-07-15 09:41:49.475507] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1659:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:15:21.534 [2024-07-15 09:41:49.475518] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1735:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set 
unsupported DULBE bit 00:15:21.534 passed 00:15:21.534 Test: test_identify_ctrlr ...passed 00:15:21.534 Test: test_identify_ctrlr_iocs_specific ...passed 00:15:21.534 Test: test_custom_admin_cmd ...passed 00:15:21.534 Test: test_fused_compare_and_write ...[2024-07-15 09:41:49.475651] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4238:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:15:21.534 [2024-07-15 09:41:49.475673] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4227:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:15:21.534 passed 00:15:21.534 Test: test_multi_async_event_reqs ...passed[2024-07-15 09:41:49.475692] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4245:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:15:21.534 00:15:21.534 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:15:21.534 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:15:21.534 Test: test_multi_async_events ...passed 00:15:21.534 Test: test_rae ...passed 00:15:21.534 Test: test_nvmf_ctrlr_create_destruct ...passed 00:15:21.534 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:15:21.534 Test: test_spdk_nvmf_request_zcopy_start ...[2024-07-15 09:41:49.475834] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4731:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 before CONNECT 00:15:21.534 passed 00:15:21.534 Test: test_zcopy_read ...passed 00:15:21.534 Test: test_zcopy_write ...[2024-07-15 09:41:49.475855] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4757:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 in state 4 00:15:21.534 passed 00:15:21.534 Test: test_nvmf_property_set ...passed 00:15:21.534 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-07-15 09:41:49.475888] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1946:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:15:21.534 passed 00:15:21.534 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-07-15 09:41:49.475906] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1946:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:15:21.534 [2024-07-15 09:41:49.475924] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1969:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:15:21.534 [2024-07-15 09:41:49.475937] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1975:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:15:21.534 [2024-07-15 09:41:49.475950] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1987:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:15:21.534 passed 00:15:21.534 Test: test_nvmf_ctrlr_ns_attachment ...passed 00:15:21.534 Test: test_nvmf_check_qpair_active ...[2024-07-15 09:41:49.475991] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4731:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before CONNECT 00:15:21.534 [2024-07-15 09:41:49.476012] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4745:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before authentication 00:15:21.534 [2024-07-15 09:41:49.476031] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4757:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 0 00:15:21.534 [2024-07-15 09:41:49.476049] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4757:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 4 00:15:21.534 [2024-07-15 09:41:49.476068] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4757:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 5 00:15:21.534 passed 00:15:21.534 00:15:21.534 Run Summary: Type Total Ran Passed Failed Inactive 00:15:21.534 suites 1 1 n/a 0 0 00:15:21.534 tests 32 32 32 0 0 00:15:21.534 asserts 977 977 977 0 n/a 00:15:21.534 00:15:21.534 Elapsed time = 0.000 seconds 00:15:21.534 09:41:49 unittest.unittest_nvmf -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:15:21.534 00:15:21.534 00:15:21.534 CUnit - A unit testing framework for C - Version 2.1-3 00:15:21.534 http://cunit.sourceforge.net/ 00:15:21.534 00:15:21.534 00:15:21.534 Suite: nvmf 00:15:21.534 Test: test_get_rw_params ...passed 00:15:21.534 Test: test_get_rw_ext_params ...passed 00:15:21.534 Test: test_lba_in_range ...passed 00:15:21.534 Test: test_get_dif_ctx ...passed 00:15:21.534 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:15:21.534 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...passed 00:15:21.534 Test: test_nvmf_bdev_ctrlr_zcopy_start ...passed 00:15:21.534 Test: test_nvmf_bdev_ctrlr_cmd ...passed 00:15:21.534 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:15:21.534 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed[2024-07-15 09:41:49.484436] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 447:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:15:21.534 [2024-07-15 09:41:49.484690] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 455:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:15:21.534 [2024-07-15 09:41:49.484709] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 463:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:15:21.534 [2024-07-15 09:41:49.484729] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 965:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:15:21.534 [2024-07-15 09:41:49.484743] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 973:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:15:21.534 [2024-07-15 09:41:49.484758] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 401:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:15:21.534 [2024-07-15 09:41:49.484774] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 409:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:15:21.534 [2024-07-15 09:41:49.484794] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 500:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:15:21.534 [2024-07-15 09:41:49.484811] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 507:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:15:21.534 00:15:21.534 00:15:21.534 Run Summary: Type Total Ran Passed Failed Inactive 00:15:21.534 suites 1 1 n/a 0 0 00:15:21.534 tests 10 10 10 0 0 00:15:21.534 asserts 159 159 159 0 n/a 00:15:21.534 00:15:21.534 Elapsed time = 0.000 seconds 00:15:21.534 09:41:49 unittest.unittest_nvmf -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:15:21.534 00:15:21.534 00:15:21.534 CUnit - A unit testing framework for C - Version 2.1-3 00:15:21.534 http://cunit.sourceforge.net/ 
00:15:21.534 00:15:21.534 00:15:21.534 Suite: nvmf 00:15:21.534 Test: test_discovery_log ...passed 00:15:21.534 Test: test_discovery_log_with_filters ...passed 00:15:21.534 00:15:21.534 Run Summary: Type Total Ran Passed Failed Inactive 00:15:21.535 suites 1 1 n/a 0 0 00:15:21.535 tests 2 2 2 0 0 00:15:21.535 asserts 238 238 238 0 n/a 00:15:21.535 00:15:21.535 Elapsed time = 0.000 seconds 00:15:21.535 09:41:49 unittest.unittest_nvmf -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:15:21.535 00:15:21.535 00:15:21.535 CUnit - A unit testing framework for C - Version 2.1-3 00:15:21.535 http://cunit.sourceforge.net/ 00:15:21.535 00:15:21.535 00:15:21.535 Suite: nvmf 00:15:21.535 Test: nvmf_test_create_subsystem ...[2024-07-15 09:41:49.500167] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 126:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:15:21.535 [2024-07-15 09:41:49.500373] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:' is invalid 00:15:21.535 [2024-07-15 09:41:49.500391] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:15:21.535 [2024-07-15 09:41:49.500402] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub' is invalid 00:15:21.535 [2024-07-15 09:41:49.500412] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:15:21.535 [2024-07-15 09:41:49.500421] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.3spdk:sub' is invalid 00:15:21.535 [2024-07-15 09:41:49.500431] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:15:21.535 [2024-07-15 09:41:49.500440] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.-spdk:subsystem1' is invalid 00:15:21.535 [2024-07-15 09:41:49.500450] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 184:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:15:21.535 [2024-07-15 09:41:49.500461] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk-:subsystem1' is invalid 00:15:21.535 [2024-07-15 09:41:49.500470] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 
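The nvmf_test_create_subsystem failures above and just below all exercise NQN validation: length between 11 and 223, an "nqn." prefix, a user-specified name after ':', and reverse-domain labels that start with a letter and end alphanumeric. A minimal sketch of those rules follows; it is a simplified stand-in for SPDK's nvmf_nqn_is_valid(), not the real implementation:

#include <ctype.h>
#include <stdbool.h>
#include <string.h>

#define NQN_MIN_LEN 11    /* strlen("nqn.yyyy-mm"); cf. "length 4 < min 11" */
#define NQN_MAX_LEN 223   /* cf. "length 224 > max 223" */

static bool nqn_is_valid_sketch(const char *nqn)
{
    size_t len = strlen(nqn);
    if (len < NQN_MIN_LEN || len > NQN_MAX_LEN) {
        return false;
    }
    if (strncmp(nqn, "nqn.", 4) != 0) {
        return false;
    }
    const char *colon = strchr(nqn, ':');
    if (colon == NULL || colon[1] == '\0') {
        return false;                 /* "must contain user specified name with a ':'" */
    }
    const char *domain = strchr(nqn + 4, '.');   /* skip the yyyy-mm date part */
    if (domain == NULL || domain + 1 >= colon) {
        return false;
    }
    domain++;
    while (domain < colon) {          /* per-label checks from the log messages */
        const char *end = memchr(domain, '.', (size_t)(colon - domain));
        const char *stop = end != NULL ? end : colon;
        if (stop == domain || !isalpha((unsigned char)domain[0]) ||
            !isalnum((unsigned char)stop[-1])) {
            return false;             /* "Label names must start with a letter." etc. */
        }
        domain = end != NULL ? end + 1 : colon;
    }
    return true;
}

(The real checker additionally enforces valid UTF-8 and the exact nqn.2014-08.org.nvmexpress:uuid: form probed above.)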
00:15:21.535 [2024-07-15 09:41:49.500480] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io..spdk:subsystem1' is invalid 00:15:21.535 [2024-07-15 09:41:49.500496] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:15:21.535 [2024-07-15 09:41:49.500506] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' is invalid 00:15:21.535 [2024-07-15 09:41:49.500538] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 00:15:21.535 [2024-07-15 09:41:49.500549] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:�subsystem1' is invalid 00:15:21.535 [2024-07-15 09:41:49.500562] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:15:21.535 passed 00:15:21.535 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-07-15 09:41:49.500573] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa' is invalid 00:15:21.535 [2024-07-15 09:41:49.500595] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:15:21.535 [2024-07-15 09:41:49.500606] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2' is invalid 00:15:21.535 [2024-07-15 09:41:49.500621] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:15:21.535 [2024-07-15 09:41:49.500634] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2' is invalid 00:15:21.535 passed 00:15:21.535 Test: test_spdk_nvmf_subsystem_add_fdp_ns ...passed 00:15:21.535 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:15:21.535 Test: test_spdk_nvmf_ns_visible ...passed 00:15:21.535 Test: test_reservation_register ...passed 00:15:21.535 Test: test_reservation_register_with_ptpl ...[2024-07-15 09:41:49.500714] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:15:21.535 [2024-07-15 09:41:49.500730] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2027:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:15:21.535 [2024-07-15 09:41:49.500764] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2158:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem with id: 0 can only add FDP namespace. 00:15:21.535 [2024-07-15 09:41:49.500805] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "": length 0 < min 11 00:15:21.535 [2024-07-15 09:41:49.500897] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3104:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:15:21.535 [2024-07-15 09:41:49.500915] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3160:nvmf_ns_reservation_register: *ERROR*: No registrant 00:15:21.535 passed 00:15:21.535 Test: test_reservation_acquire_preempt_1 ...passed 00:15:21.535 Test: test_reservation_acquire_release_with_ptpl ...[2024-07-15 09:41:49.501142] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3104:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:15:21.535 passed 00:15:21.535 Test: test_reservation_release ...passed 00:15:21.535 Test: test_reservation_unregister_notification ...passed 00:15:21.535 Test: test_reservation_release_notification ...passed 00:15:21.535 Test: test_reservation_release_notification_write_exclusive ...[2024-07-15 09:41:49.501319] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3104:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:15:21.535 [2024-07-15 09:41:49.501349] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3104:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:15:21.535 [2024-07-15 09:41:49.501375] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3104:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:15:21.535 [2024-07-15 09:41:49.501397] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3104:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:15:21.535 passed 00:15:21.535 Test: test_reservation_clear_notification ...passed 00:15:21.535 Test: test_reservation_preempt_notification ...passed 00:15:21.535 Test: test_spdk_nvmf_ns_event ...passed 00:15:21.535 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed 00:15:21.535 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:15:21.535 Test: test_spdk_nvmf_subsystem_add_host ...passed 00:15:21.535 Test: test_nvmf_ns_reservation_report ...passed 00:15:21.535 Test: test_nvmf_nqn_is_valid ...[2024-07-15 09:41:49.501420] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3104:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:15:21.535 [2024-07-15 09:41:49.501442] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3104:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:15:21.535 [2024-07-15 09:41:49.501545] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 265:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:15:21.535 [2024-07-15 09:41:49.501570] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to transport_ut transport 00:15:21.535 [2024-07-15 09:41:49.501594] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3466:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:15:21.535 [2024-07-15 09:41:49.501624] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: 
*ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:15:21.535 passed 00:15:21.535 Test: test_nvmf_ns_reservation_restore ...passed 00:15:21.535 Test: test_nvmf_subsystem_state_change ...passed 00:15:21.535 Test: test_nvmf_reservation_custom_ops ...[2024-07-15 09:41:49.501635] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:7572b6dd-428e-11ef-a0af-c98d8ee52a9": uuid is not the correct length 00:15:21.535 [2024-07-15 09:41:49.501649] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:15:21.535 [2024-07-15 09:41:49.501690] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2659:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:15:21.535 passed 00:15:21.535 00:15:21.535 Run Summary: Type Total Ran Passed Failed Inactive 00:15:21.535 suites 1 1 n/a 0 0 00:15:21.535 tests 24 24 24 0 0 00:15:21.535 asserts 499 499 499 0 n/a 00:15:21.535 00:15:21.535 Elapsed time = 0.000 seconds 00:15:21.535 09:41:49 unittest.unittest_nvmf -- unit/unittest.sh@112 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:15:21.535 00:15:21.535 00:15:21.535 CUnit - A unit testing framework for C - Version 2.1-3 00:15:21.535 http://cunit.sourceforge.net/ 00:15:21.535 00:15:21.535 00:15:21.535 Suite: nvmf 00:15:21.535 Test: test_nvmf_tcp_create ...[2024-07-15 09:41:49.514800] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 745:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:15:21.535 passed 00:15:21.535 Test: test_nvmf_tcp_destroy ...passed 00:15:21.535 Test: test_nvmf_tcp_poll_group_create ...passed 00:15:21.535 Test: test_nvmf_tcp_send_c2h_data ...passed 00:15:21.535 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:15:21.535 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:15:21.535 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:15:21.535 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-07-15 09:41:49.531643] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:15:21.535 passed 00:15:21.535 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:15:21.535 Test: test_nvmf_tcp_icreq_handle ...passed 00:15:21.535 Test: test_nvmf_tcp_check_xfer_type ...passed 00:15:21.535 Test: test_nvmf_tcp_invalid_sgl ...passed 00:15:21.535 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-07-15 09:41:49.531704] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8211c4b48 is same with the state(5) to be set 00:15:21.535 [2024-07-15 09:41:49.531718] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8211c4b48 is same with the state(5) to be set 00:15:21.535 [2024-07-15 09:41:49.531786] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:15:21.536 [2024-07-15 09:41:49.531801] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8211c4b48 is same with the state(5) to be set 00:15:21.536 [2024-07-15 09:41:49.531850] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2122:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:15:21.536 [2024-07-15 09:41:49.531864] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:15:21.536 [2024-07-15 09:41:49.531878] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8211c4a50 is same with the state(5) to be set 00:15:21.536 [2024-07-15 09:41:49.531890] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2122:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:15:21.536 [2024-07-15 09:41:49.531910] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8211c4a50 is same with the state(5) to be set 00:15:21.536 [2024-07-15 09:41:49.531921] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:15:21.536 [2024-07-15 09:41:49.531932] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8211c4a50 is same with the state(5) to be set 00:15:21.536 [2024-07-15 09:41:49.531943] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=0 00:15:21.536 [2024-07-15 09:41:49.531953] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8211c4a50 is same with the state(5) to be set 00:15:21.536 [2024-07-15 09:41:49.531977] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2518:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:15:21.536 [2024-07-15 09:41:49.531991] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:15:21.536 [2024-07-15 09:41:49.532008] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8211c4a50 is same with the state(5) to be set 00:15:21.536 [2024-07-15 09:41:49.532022] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2249:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x8211c42d8 00:15:21.536 [2024-07-15 09:41:49.532036] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:15:21.536 [2024-07-15 09:41:49.532050] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8211c4b48 is same with the state(5) to be set 00:15:21.536 [2024-07-15 09:41:49.532066] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2308:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x8211c4b48 00:15:21.536 [2024-07-15 09:41:49.532081] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:15:21.536 [2024-07-15 09:41:49.532094] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8211c4b48 is same with the state(5) to be set 00:15:21.536 [2024-07-15 09:41:49.532108] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2259:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:15:21.536 [2024-07-15 09:41:49.532124] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:15:21.536 [2024-07-15 09:41:49.532141] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x8211c4b48 is same with the state(5) to be set 00:15:21.536 [2024-07-15 09:41:49.532168] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2298:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:15:21.536 [2024-07-15 09:41:49.532185] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:15:21.536 [2024-07-15 09:41:49.532195] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8211c4b48 is same with the state(5) to be set 00:15:21.536 [2024-07-15 09:41:49.532205] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:15:21.536 [2024-07-15 09:41:49.532217] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8211c4b48 is same with the state(5) to be set 00:15:21.536 [2024-07-15 09:41:49.532234] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:15:21.536 [2024-07-15 09:41:49.532251] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8211c4b48 is same with the state(5) to be set 00:15:21.536 [2024-07-15 09:41:49.532260] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:15:21.536 [2024-07-15 09:41:49.532269] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8211c4b48 is same with the state(5) to be set 00:15:21.536 [2024-07-15 09:41:49.532283] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:15:21.536 [2024-07-15 09:41:49.532294] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8211c4b48 is same with the state(5) to be set 00:15:21.536 [2024-07-15 09:41:49.532304] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:15:21.536 [2024-07-15 09:41:49.532316] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8211c4b48 is same with the state(5) to be set 00:15:21.536 [2024-07-15 09:41:49.532332] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:15:21.536 passed 00:15:21.536 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-07-15 09:41:49.532344] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8211c4b48 is same with the state(5) to be set 00:15:21.536 passed 00:15:21.536 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-07-15 09:41:49.540754] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:15:21.536 passed 00:15:21.536 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-07-15 09:41:49.540799] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 
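The wall of "recv state of tqpair=... is same with the state(5) to be set" lines above comes from one small guard in the TCP transport's PDU receive state machine, which tcp_ut drives on purpose. A sketch of that guard follows; the enum names and the numbering (so which state "state(5)" actually is) are assumptions here, and the real states live in lib/nvmf/tcp.c:

#include <stdio.h>

enum tcp_recv_state {
    RECV_STATE_WAIT_CH,       /* waiting for the common PDU header */
    RECV_STATE_WAIT_PSH,      /* waiting for the PDU-specific header */
    RECV_STATE_WAIT_PAYLOAD,
    RECV_STATE_AWAIT_REQ,
    RECV_STATE_QUIESCING,
    RECV_STATE_ERROR,         /* a terminal state, like the state(5) in the log */
};

struct tcp_qpair { enum tcp_recv_state recv_state; };

static void tcp_qpair_set_recv_state(struct tcp_qpair *tqpair,
                                     enum tcp_recv_state state)
{
    if (tqpair->recv_state == state) {
        /* Diagnostic only; the unit test hits this branch repeatedly. */
        fprintf(stderr,
                "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                (void *)tqpair, (int)state);
        return;
    }
    tqpair->recv_state = state;
}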
00:15:21.536 passed 00:15:21.536 Test: test_nvmf_tcp_tls_generate_tls_psk ...passed 00:15:21.536 00:15:21.536 Run Summary: Type Total Ran Passed Failed Inactive 00:15:21.536 suites 1 1 n/a 0 0 00:15:21.536 tests 17 17 17 0 0 00:15:21.536 asserts 222 222 222 0 n/a 00:15:21.536 00:15:21.536 Elapsed time = 0.023 seconds 00:15:21.536 [2024-07-15 09:41:49.540980] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:15:21.536 [2024-07-15 09:41:49.541001] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:15:21.536 [2024-07-15 09:41:49.541086] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:15:21.536 [2024-07-15 09:41:49.541100] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:15:21.536 09:41:49 unittest.unittest_nvmf -- unit/unittest.sh@113 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:15:21.536 00:15:21.536 00:15:21.536 CUnit - A unit testing framework for C - Version 2.1-3 00:15:21.536 http://cunit.sourceforge.net/ 00:15:21.536 00:15:21.536 00:15:21.536 Suite: nvmf 00:15:21.536 Test: test_nvmf_tgt_create_poll_group ...passed 00:15:21.536 00:15:21.536 Run Summary: Type Total Ran Passed Failed Inactive 00:15:21.536 suites 1 1 n/a 0 0 00:15:21.536 tests 1 1 1 0 0 00:15:21.536 asserts 17 17 17 0 n/a 00:15:21.536 00:15:21.536 Elapsed time = 0.008 seconds 00:15:21.536 00:15:21.536 real 0m0.088s 00:15:21.536 user 0m0.018s 00:15:21.536 sys 0m0.067s 00:15:21.536 09:41:49 unittest.unittest_nvmf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:21.536 09:41:49 unittest.unittest_nvmf -- common/autotest_common.sh@10 -- # set +x 00:15:21.536 ************************************ 00:15:21.536 END TEST unittest_nvmf 00:15:21.536 ************************************ 00:15:21.536 09:41:49 unittest -- common/autotest_common.sh@1142 -- # return 0 00:15:21.536 09:41:49 unittest -- unit/unittest.sh@262 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:15:21.536 09:41:49 unittest -- unit/unittest.sh@267 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:15:21.536 09:41:49 unittest -- unit/unittest.sh@268 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:15:21.536 09:41:49 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:21.536 09:41:49 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:21.536 09:41:49 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:21.536 ************************************ 00:15:21.536 START TEST unittest_nvmf_rdma 00:15:21.536 ************************************ 00:15:21.536 09:41:49 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:15:21.536 00:15:21.536 00:15:21.536 CUnit - A unit testing framework for C - Version 2.1-3 00:15:21.536 http://cunit.sourceforge.net/ 00:15:21.536 00:15:21.536 00:15:21.536 Suite: nvmf 00:15:21.536 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-07-15 09:41:49.609088] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1864:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 
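The rdma_ut parse-SGL errors that start above and continue below are bounds checks on the two SGL flavors an NVMe-oF capsule can carry: a keyed remote buffer checked against max_io_size, and in-capsule data checked against the capsule length. A minimal sketch of that pair of checks, with names simplified from nvmf_rdma_request_parse_sgl:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool rdma_sgl_in_bounds(uint32_t keyed_sgl_len, uint32_t max_io_size,
                               uint32_t in_capsule_len, uint32_t capsule_len)
{
    if (keyed_sgl_len > max_io_size) {
        fprintf(stderr, "SGL length 0x%x exceeds max io size 0x%x\n",
                (unsigned)keyed_sgl_len, (unsigned)max_io_size);
        return false;
    }
    if (in_capsule_len > capsule_len) {
        fprintf(stderr, "In-capsule data length 0x%x exceeds capsule length 0x%x\n",
                (unsigned)in_capsule_len, (unsigned)capsule_len);
        return false;
    }
    return true;
}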
00:15:21.536 [2024-07-15 09:41:49.609365] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1914:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:15:21.536 [2024-07-15 09:41:49.609399] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1914:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:15:21.536 passed 00:15:21.536 Test: test_spdk_nvmf_rdma_request_process ...passed 00:15:21.536 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:15:21.536 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:15:21.536 Test: test_nvmf_rdma_opts_init ...passed 00:15:21.536 Test: test_nvmf_rdma_request_free_data ...passed 00:15:21.536 Test: test_nvmf_rdma_resources_create ...passed 00:15:21.536 Test: test_nvmf_rdma_qpair_compare ...passed 00:15:21.536 Test: test_nvmf_rdma_resize_cq ...[2024-07-15 09:41:49.610337] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 955:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:15:21.536 Using CQ of insufficient size may lead to CQ overrun 00:15:21.536 passed 00:15:21.536 00:15:21.536 Run Summary: Type Total Ran Passed Failed Inactive 00:15:21.536 suites 1 1 n/a 0 0 00:15:21.536 tests 9 9 9 0 0 00:15:21.536 asserts 579 579 579 0 n/a 00:15:21.536 00:15:21.536 Elapsed time = 0.000 seconds 00:15:21.536 [2024-07-15 09:41:49.610371] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 960:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:15:21.536 [2024-07-15 09:41:49.610444] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 967:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 0: No error: 0 00:15:21.536 00:15:21.536 real 0m0.008s 00:15:21.536 user 0m0.008s 00:15:21.536 sys 0m0.000s 00:15:21.536 09:41:49 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:21.536 09:41:49 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:21.536 ************************************ 00:15:21.536 END TEST unittest_nvmf_rdma 00:15:21.536 ************************************ 00:15:21.799 09:41:49 unittest -- common/autotest_common.sh@1142 -- # return 0 00:15:21.799 09:41:49 unittest -- unit/unittest.sh@271 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:15:21.799 09:41:49 unittest -- unit/unittest.sh@275 -- # run_test unittest_scsi unittest_scsi 00:15:21.799 09:41:49 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:21.799 09:41:49 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:21.799 09:41:49 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:21.799 ************************************ 00:15:21.799 START TEST unittest_scsi 00:15:21.799 ************************************ 00:15:21.799 09:41:49 unittest.unittest_scsi -- common/autotest_common.sh@1123 -- # unittest_scsi 00:15:21.799 09:41:49 unittest.unittest_scsi -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:15:21.799 00:15:21.799 00:15:21.799 CUnit - A unit testing framework for C - Version 2.1-3 00:15:21.799 http://cunit.sourceforge.net/ 00:15:21.799 00:15:21.799 00:15:21.799 Suite: dev_suite 00:15:21.799 Test: dev_destruct_null_dev ...passed 00:15:21.799 Test: dev_destruct_zero_luns ...passed 00:15:21.799 Test: dev_destruct_null_lun ...passed 00:15:21.799 Test: dev_destruct_success ...passed 00:15:21.799 Test: dev_construct_num_luns_zero 
...passed 00:15:21.799 Test: dev_construct_no_lun_zero ...passed 00:15:21.800 Test: dev_construct_null_lun ...passed 00:15:21.800 Test: dev_construct_name_too_long ...passed 00:15:21.800 Test: dev_construct_success ...passed 00:15:21.800 Test: dev_construct_success_lun_zero_not_first ...passed 00:15:21.800 Test: dev_queue_mgmt_task_success ...[2024-07-15 09:41:49.660502] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:15:21.800 [2024-07-15 09:41:49.660729] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:15:21.800 [2024-07-15 09:41:49.660744] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 248:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:15:21.800 [2024-07-15 09:41:49.660757] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 223:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:15:21.800 passed 00:15:21.800 Test: dev_queue_task_success ...passed 00:15:21.800 Test: dev_stop_success ...passed 00:15:21.800 Test: dev_add_port_max_ports ...passed 00:15:21.800 Test: dev_add_port_construct_failure1 ...[2024-07-15 09:41:49.660801] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:15:21.800 passed 00:15:21.800 Test: dev_add_port_construct_failure2 ...passed 00:15:21.800 Test: dev_add_port_success1 ...passed 00:15:21.800 Test: dev_add_port_success2 ...passed 00:15:21.800 Test: dev_add_port_success3 ...passed 00:15:21.800 Test: dev_find_port_by_id_num_ports_zero ...passed 00:15:21.800 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:15:21.800 Test: dev_find_port_by_id_success ...passed 00:15:21.800 Test: dev_add_lun_bdev_not_found ...passed 00:15:21.800 Test: dev_add_lun_no_free_lun_id ...[2024-07-15 09:41:49.660817] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:15:21.800 [2024-07-15 09:41:49.660829] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:15:21.800 [2024-07-15 09:41:49.661093] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:15:21.800 passed 00:15:21.800 Test: dev_add_lun_success1 ...passed 00:15:21.800 Test: dev_add_lun_success2 ...passed 00:15:21.800 Test: dev_check_pending_tasks ...passed 00:15:21.800 Test: dev_iterate_luns ...passed 00:15:21.800 Test: dev_find_free_lun ...passed 00:15:21.800 00:15:21.800 Run Summary: Type Total Ran Passed Failed Inactive 00:15:21.800 suites 1 1 n/a 0 0 00:15:21.800 tests 29 29 29 0 0 00:15:21.800 asserts 97 97 97 0 n/a 00:15:21.800 00:15:21.800 Elapsed time = 0.000 seconds 00:15:21.800 09:41:49 unittest.unittest_scsi -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:15:21.800 00:15:21.800 00:15:21.800 CUnit - A unit testing framework for C - Version 2.1-3 00:15:21.800 http://cunit.sourceforge.net/ 00:15:21.800 00:15:21.800 00:15:21.800 Suite: lun_suite 00:15:21.800 Test: lun_task_mgmt_execute_abort_task_not_supported ...passed 00:15:21.800 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...passed 
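By this point the log has printed the same Run Summary block for a dozen binaries. Each *_ut executable is a small CUnit 2.1-3 program: register a suite, add tests, run in verbose mode, and exit with the failure count (which the run_test wrapper turns into the PASS/FAIL banners). A minimal sketch using CUnit's Basic-interface API; the test body here is a placeholder, not the real dev_suite test:

#include <CUnit/Basic.h>

static void dev_destruct_null_dev(void)
{
    CU_ASSERT(1);   /* placeholder; the real test drives lib/scsi/dev.c */
}

int main(void)
{
    if (CU_initialize_registry() != CUE_SUCCESS) {
        return CU_get_error();
    }
    CU_pSuite suite = CU_add_suite("dev_suite", NULL, NULL);
    if (suite == NULL) {
        CU_cleanup_registry();
        return CU_get_error();
    }
    CU_add_test(suite, "dev_destruct_null_dev", dev_destruct_null_dev);
    CU_basic_set_mode(CU_BRM_VERBOSE);
    CU_basic_run_tests();            /* prints the "Run Summary" blocks seen here */
    unsigned int failures = CU_get_number_of_failures();
    CU_cleanup_registry();
    return (int)failures;
}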
00:15:21.800 Test: lun_task_mgmt_execute_lun_reset ...passed 00:15:21.800 Test: lun_task_mgmt_execute_target_reset ...passed 00:15:21.800 Test: lun_task_mgmt_execute_invalid_case ...passed 00:15:21.800 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:15:21.800 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:15:21.800 Test: lun_append_task_null_lun_not_supported ...passed 00:15:21.800 Test: lun_execute_scsi_task_pending ...passed 00:15:21.800 Test: lun_execute_scsi_task_complete ...passed 00:15:21.800 Test: lun_execute_scsi_task_resize ...passed 00:15:21.800 Test: lun_destruct_success ...passed 00:15:21.800 Test: lun_construct_null_ctx ...passed 00:15:21.800 Test: lun_construct_success ...passed 00:15:21.800 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:15:21.800 Test: lun_reset_task_suspend_scsi_task ...passed 00:15:21.800 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:15:21.800 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed[2024-07-15 09:41:49.670288] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:15:21.800 [2024-07-15 09:41:49.670606] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:15:21.800 [2024-07-15 09:41:49.670645] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:15:21.800 [2024-07-15 09:41:49.670724] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:15:21.800 00:15:21.800 00:15:21.800 Run Summary: Type Total Ran Passed Failed Inactive 00:15:21.800 suites 1 1 n/a 0 0 00:15:21.800 tests 18 18 18 0 0 00:15:21.800 asserts 153 153 153 0 n/a 00:15:21.800 00:15:21.800 Elapsed time = 0.000 seconds 00:15:21.800 09:41:49 unittest.unittest_scsi -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:15:21.800 00:15:21.800 00:15:21.800 CUnit - A unit testing framework for C - Version 2.1-3 00:15:21.800 http://cunit.sourceforge.net/ 00:15:21.800 00:15:21.800 00:15:21.800 Suite: scsi_suite 00:15:21.800 Test: scsi_init ...passed 00:15:21.800 00:15:21.800 Run Summary: Type Total Ran Passed Failed Inactive 00:15:21.800 suites 1 1 n/a 0 0 00:15:21.800 tests 1 1 1 0 0 00:15:21.800 asserts 1 1 1 0 n/a 00:15:21.800 00:15:21.800 Elapsed time = 0.000 seconds 00:15:21.800 09:41:49 unittest.unittest_scsi -- unit/unittest.sh@120 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:15:21.800 00:15:21.800 00:15:21.800 CUnit - A unit testing framework for C - Version 2.1-3 00:15:21.800 http://cunit.sourceforge.net/ 00:15:21.800 00:15:21.800 00:15:21.800 Suite: translation_suite 00:15:21.800 Test: mode_select_6_test ...passed 00:15:21.800 Test: mode_select_6_test2 ...passed 00:15:21.800 Test: mode_sense_6_test ...passed 00:15:21.800 Test: mode_sense_10_test ...passed 00:15:21.800 Test: inquiry_evpd_test ...passed 00:15:21.800 Test: inquiry_standard_test ...passed 00:15:21.800 Test: inquiry_overflow_test ...passed 00:15:21.800 Test: task_complete_test ...passed 00:15:21.800 Test: lba_range_test ...passed 00:15:21.800 Test: xfer_len_test ...[2024-07-15 09:41:49.686552] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1271:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:15:21.800 passed 00:15:21.800 Test: xfer_test ...passed 00:15:21.800 Test: scsi_name_padding_test ...passed 
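The translation_suite xfer_len_test above trips "xfer_len 8193 > maximum transfer length 8192" in scsi_bdev.c's read/write translation. A sketch of the idea: pull the big-endian transfer length out of a READ(10) CDB and bounds-check it. The 8192-block limit is what this unit test configures, not a SCSI constant, and the function name below is illustrative:

#include <stdint.h>
#include <stdio.h>

static int bdev_scsi_readwrite_sketch(const uint8_t *cdb, uint32_t max_xfer_len)
{
    /* READ(10): the transfer length is a big-endian 16-bit field in bytes 7..8 */
    uint32_t xfer_len = ((uint32_t)cdb[7] << 8) | cdb[8];

    if (xfer_len > max_xfer_len) {
        fprintf(stderr, "xfer_len %u > maximum transfer length %u\n",
                (unsigned)xfer_len, (unsigned)max_xfer_len);
        return -1;   /* surfaces to the initiator as a CHECK CONDITION */
    }
    return 0;
}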
00:15:21.800 Test: get_dif_ctx_test ...passed 00:15:21.800 Test: unmap_split_test ...passed 00:15:21.800 00:15:21.800 Run Summary: Type Total Ran Passed Failed Inactive 00:15:21.800 suites 1 1 n/a 0 0 00:15:21.800 tests 14 14 14 0 0 00:15:21.800 asserts 1205 1205 1205 0 n/a 00:15:21.800 00:15:21.800 Elapsed time = 0.000 seconds 00:15:21.800 09:41:49 unittest.unittest_scsi -- unit/unittest.sh@121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:15:21.800 00:15:21.800 00:15:21.800 CUnit - A unit testing framework for C - Version 2.1-3 00:15:21.800 http://cunit.sourceforge.net/ 00:15:21.800 00:15:21.800 00:15:21.800 Suite: reservation_suite 00:15:21.800 Test: test_reservation_register ...passed 00:15:21.800 Test: test_reservation_reserve ...passed 00:15:21.800 Test: test_all_registrant_reservation_reserve ...passed 00:15:21.800 Test: test_all_registrant_reservation_access ...passed 00:15:21.800 Test: test_reservation_preempt_non_all_regs ...[2024-07-15 09:41:49.693382] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:15:21.800 [2024-07-15 09:41:49.693647] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:15:21.800 [2024-07-15 09:41:49.693666] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 215:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:15:21.800 [2024-07-15 09:41:49.693680] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 210:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:15:21.800 [2024-07-15 09:41:49.693697] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:15:21.800 [2024-07-15 09:41:49.693728] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:15:21.800 [2024-07-15 09:41:49.693749] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 866:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type reject command 0x8 00:15:21.800 [2024-07-15 09:41:49.693762] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 866:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type reject command 0xaa 00:15:21.800 passed 00:15:21.800 Test: test_reservation_preempt_all_regs ...[2024-07-15 09:41:49.693780] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:15:21.800 [2024-07-15 09:41:49.693804] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 464:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:15:21.800 passed 00:15:21.800 Test: test_reservation_cmds_conflict ...passed 00:15:21.800 Test: test_scsi2_reserve_release ...passed 00:15:21.800 Test: test_pr_with_scsi2_reserve_release ...passed 00:15:21.800 00:15:21.800 Run Summary: Type Total Ran Passed Failed Inactive 00:15:21.800 suites 1 1 n/a 0 0 00:15:21.800 tests 9 9 9 0 0 00:15:21.800 asserts 344 344 344 0 n/a 00:15:21.800 00:15:21.800 Elapsed time = 0.000 seconds 00:15:21.800 [2024-07-15 09:41:49.693825] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:15:21.800 [2024-07-15 09:41:49.693845] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:15:21.800 
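The reservation_suite messages above and continuing below ("Reservation key 0xa1 don't match registrant's key 0xa", "No registrant", "Only 1 holder is allowed for type 1") all come from SCSI-3 persistent reservation bookkeeping in lib/scsi/scsi_pr.c. A sketch of the registrant-key comparison at the heart of it; the struct and function names are illustrative, and the message wording deliberately matches the log's:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct registrant { uint64_t rkey; };   /* key recorded for one I_T nexus */

static bool pr_out_register_check(const struct registrant *reg,
                                  uint64_t reservation_key)
{
    if (reg == NULL) {
        fprintf(stderr, "No registrant\n");
        return false;
    }
    if (reg->rkey != reservation_key) {
        /* REGISTER must present the key already stored for this host. */
        fprintf(stderr, "Reservation key 0x%llx don't match registrant's key 0x%llx\n",
                (unsigned long long)reservation_key,
                (unsigned long long)reg->rkey);
        return false;
    }
    return true;
}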
[2024-07-15 09:41:49.693857] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 858:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:15:21.800 [2024-07-15 09:41:49.693869] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:15:21.800 [2024-07-15 09:41:49.693883] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:15:21.800 [2024-07-15 09:41:49.693895] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:15:21.800 [2024-07-15 09:41:49.693907] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:15:21.800 [2024-07-15 09:41:49.693936] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:15:21.800 00:15:21.800 real 0m0.039s 00:15:21.800 user 0m0.013s 00:15:21.801 sys 0m0.030s 00:15:21.801 09:41:49 unittest.unittest_scsi -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:21.801 ************************************ 00:15:21.801 END TEST unittest_scsi 00:15:21.801 ************************************ 00:15:21.801 09:41:49 unittest.unittest_scsi -- common/autotest_common.sh@10 -- # set +x 00:15:21.801 09:41:49 unittest -- common/autotest_common.sh@1142 -- # return 0 00:15:21.801 09:41:49 unittest -- unit/unittest.sh@278 -- # uname -s 00:15:21.801 09:41:49 unittest -- unit/unittest.sh@278 -- # '[' FreeBSD = Linux ']' 00:15:21.801 09:41:49 unittest -- unit/unittest.sh@281 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:15:21.801 09:41:49 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:21.801 09:41:49 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:21.801 09:41:49 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:21.801 ************************************ 00:15:21.801 START TEST unittest_thread 00:15:21.801 ************************************ 00:15:21.801 09:41:49 unittest.unittest_thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:15:21.801 00:15:21.801 00:15:21.801 CUnit - A unit testing framework for C - Version 2.1-3 00:15:21.801 http://cunit.sourceforge.net/ 00:15:21.801 00:15:21.801 00:15:21.801 Suite: io_channel 00:15:21.801 Test: thread_alloc ...passed 00:15:21.801 Test: thread_send_msg ...passed 00:15:21.801 Test: thread_poller ...passed 00:15:21.801 Test: poller_pause ...passed 00:15:21.801 Test: thread_for_each ...passed 00:15:21.801 Test: for_each_channel_remove ...passed 00:15:21.801 Test: for_each_channel_unreg ...[2024-07-15 09:41:49.752250] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2178:spdk_io_device_register: *ERROR*: io_device 0x82103dc84 already registered (old:0x32790fa67000 new:0x32790fa67180) 00:15:21.801 passed 00:15:21.801 Test: thread_name ...passed 00:15:21.801 Test: channel ...[2024-07-15 09:41:49.752966] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2311:spdk_get_io_channel: *ERROR*: could not find io_device 0x228838 00:15:21.801 passed 00:15:21.801 Test: channel_destroy_races ...passed 00:15:21.801 Test: thread_exit_test ...[2024-07-15 09:41:49.753512] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 
640:thread_exit: *ERROR*: thread 0x32790fa2ca80 got timeout, and move it to the exited state forcefully 00:15:21.801 passed 00:15:21.801 Test: thread_update_stats_test ...passed 00:15:21.801 Test: nested_channel ...passed 00:15:21.801 Test: device_unregister_and_thread_exit_race ...passed 00:15:21.801 Test: cache_closest_timed_poller ...passed 00:15:21.801 Test: multi_timed_pollers_have_same_expiration ...passed 00:15:21.801 Test: io_device_lookup ...passed 00:15:21.801 Test: spdk_spin ...[2024-07-15 09:41:49.754686] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:15:21.801 [2024-07-15 09:41:49.754713] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x82103dc80 00:15:21.801 [2024-07-15 09:41:49.754727] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3120:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:15:21.801 [2024-07-15 09:41:49.754908] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:15:21.801 [2024-07-15 09:41:49.754922] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x82103dc80 00:15:21.801 [2024-07-15 09:41:49.754933] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3103:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:15:21.801 [2024-07-15 09:41:49.754944] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x82103dc80 00:15:21.801 [2024-07-15 09:41:49.754955] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3103:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:15:21.801 [2024-07-15 09:41:49.754966] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x82103dc80 00:15:21.801 [2024-07-15 09:41:49.754977] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3064:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:15:21.801 [2024-07-15 09:41:49.754988] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x82103dc80 00:15:21.801 passed 00:15:21.801 Test: for_each_channel_and_thread_exit_race ...passed 00:15:21.801 Test: for_each_thread_and_thread_exit_race ...passed 00:15:21.801 00:15:21.801 Run Summary: Type Total Ran Passed Failed Inactive 00:15:21.801 suites 1 1 n/a 0 0 00:15:21.801 tests 20 20 20 0 0 00:15:21.801 asserts 409 409 409 0 n/a 00:15:21.801 00:15:21.801 Elapsed time = 0.008 seconds 00:15:21.801 00:15:21.801 real 0m0.013s 00:15:21.801 user 0m0.000s 00:15:21.801 sys 0m0.016s 00:15:21.801 09:41:49 unittest.unittest_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:21.801 ************************************ 00:15:21.801 END TEST unittest_thread 00:15:21.801 ************************************ 00:15:21.801 09:41:49 unittest.unittest_thread -- common/autotest_common.sh@10 -- # set +x 00:15:21.801 09:41:49 unittest -- common/autotest_common.sh@1142 -- # return 0 00:15:21.801 09:41:49 unittest -- unit/unittest.sh@282 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:15:21.801 09:41:49 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 
']' 00:15:21.801 09:41:49 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:21.801 09:41:49 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:21.801 ************************************ 00:15:21.801 START TEST unittest_iobuf 00:15:21.801 ************************************ 00:15:21.801 09:41:49 unittest.unittest_iobuf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:15:21.801 00:15:21.801 00:15:21.801 CUnit - A unit testing framework for C - Version 2.1-3 00:15:21.801 http://cunit.sourceforge.net/ 00:15:21.801 00:15:21.801 00:15:21.801 Suite: io_channel 00:15:21.801 Test: iobuf ...passed 00:15:21.801 Test: iobuf_cache ...[2024-07-15 09:41:49.803675] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 362:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf small buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:15:21.801 [2024-07-15 09:41:49.803934] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 364:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:15:21.801 passed 00:15:21.801 00:15:21.801 Run Summary: Type Total Ran Passed Failed Inactive 00:15:21.801 suites 1 1 n/a 0 0 00:15:21.801 tests 2 2 2 0 0 00:15:21.801 asserts 107 107 107 0 n/a 00:15:21.801 00:15:21.801 Elapsed time = 0.000 seconds 00:15:21.801 [2024-07-15 09:41:49.803974] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 374:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf large buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:15:21.801 [2024-07-15 09:41:49.803993] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 376:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:15:21.801 [2024-07-15 09:41:49.804014] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 362:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module1' iobuf small buffer cache at 0/4 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:15:21.801 [2024-07-15 09:41:49.804032] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 364:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
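The iobuf_cache errors above are the instructive part of this suite: when a channel cannot fully populate its per-thread cache, SPDK points at spdk_iobuf_opts.small_pool_count / large_pool_count. A sketch of bumping those pools at init time; spdk_iobuf_get_opts/spdk_iobuf_set_opts and these field names exist in include/spdk/thread.h, but the exact signatures vary across SPDK releases and the counts below are purely illustrative:

#include "spdk/thread.h"

static int configure_iobuf_sketch(void)
{
    struct spdk_iobuf_opts opts;

    spdk_iobuf_get_opts(&opts);      /* newer releases also take sizeof(opts) */
    opts.small_pool_count = 8192;    /* raise if small-buffer caches under-fill */
    opts.large_pool_count = 1024;    /* raise if large-buffer caches under-fill */
    return spdk_iobuf_set_opts(&opts);
}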
00:15:21.801 00:15:21.801 real 0m0.006s 00:15:21.801 user 0m0.000s 00:15:21.801 sys 0m0.008s 00:15:21.801 09:41:49 unittest.unittest_iobuf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:21.801 ************************************ 00:15:21.801 END TEST unittest_iobuf 00:15:21.801 ************************************ 00:15:21.801 09:41:49 unittest.unittest_iobuf -- common/autotest_common.sh@10 -- # set +x 00:15:21.801 09:41:49 unittest -- common/autotest_common.sh@1142 -- # return 0 00:15:21.801 09:41:49 unittest -- unit/unittest.sh@283 -- # run_test unittest_util unittest_util 00:15:21.801 09:41:49 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:21.801 09:41:49 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:21.801 09:41:49 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:21.801 ************************************ 00:15:21.801 START TEST unittest_util 00:15:21.801 ************************************ 00:15:21.801 09:41:49 unittest.unittest_util -- common/autotest_common.sh@1123 -- # unittest_util 00:15:21.801 09:41:49 unittest.unittest_util -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:15:21.801 00:15:21.801 00:15:21.801 CUnit - A unit testing framework for C - Version 2.1-3 00:15:21.801 http://cunit.sourceforge.net/ 00:15:21.801 00:15:21.801 00:15:21.801 Suite: base64 00:15:21.801 Test: test_base64_get_encoded_strlen ...passed 00:15:21.801 Test: test_base64_get_decoded_len ...passed 00:15:21.801 Test: test_base64_encode ...passed 00:15:21.801 Test: test_base64_decode ...passed 00:15:21.801 Test: test_base64_urlsafe_encode ...passed 00:15:21.801 Test: test_base64_urlsafe_decode ...passed 00:15:21.801 00:15:21.801 Run Summary: Type Total Ran Passed Failed Inactive 00:15:21.801 suites 1 1 n/a 0 0 00:15:21.801 tests 6 6 6 0 0 00:15:21.801 asserts 112 112 112 0 n/a 00:15:21.801 00:15:21.801 Elapsed time = 0.000 seconds 00:15:21.801 09:41:49 unittest.unittest_util -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:15:21.801 00:15:21.801 00:15:21.801 CUnit - A unit testing framework for C - Version 2.1-3 00:15:21.801 http://cunit.sourceforge.net/ 00:15:21.801 00:15:21.801 00:15:21.801 Suite: bit_array 00:15:21.801 Test: test_1bit ...passed 00:15:21.801 Test: test_64bit ...passed 00:15:21.801 Test: test_find ...passed 00:15:21.801 Test: test_resize ...passed 00:15:21.801 Test: test_errors ...passed 00:15:21.801 Test: test_count ...passed 00:15:21.801 Test: test_mask_store_load ...passed 00:15:21.801 Test: test_mask_clear ...passed 00:15:21.801 00:15:21.801 Run Summary: Type Total Ran Passed Failed Inactive 00:15:21.801 suites 1 1 n/a 0 0 00:15:21.801 tests 8 8 8 0 0 00:15:21.801 asserts 5075 5075 5075 0 n/a 00:15:21.801 00:15:21.801 Elapsed time = 0.000 seconds 00:15:21.801 09:41:49 unittest.unittest_util -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:15:21.801 00:15:21.801 00:15:21.801 CUnit - A unit testing framework for C - Version 2.1-3 00:15:21.801 http://cunit.sourceforge.net/ 00:15:21.801 00:15:21.801 00:15:21.801 Suite: cpuset 00:15:21.801 Test: test_cpuset ...passed 00:15:21.801 Test: test_cpuset_parse ...[2024-07-15 09:41:49.859549] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 256:parse_list: *ERROR*: Unexpected end of core list '[' 00:15:21.801 passed 00:15:21.802 Test: test_cpuset_fmt ...passed 00:15:21.802 Test: test_cpuset_foreach ...passed 00:15:21.802 
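The test_cpuset_parse errors beginning here (the buffered output continues them after the run summary below) walk spdk_cpuset_parse through its core-list grammar. A minimal sketch of the accepted and rejected forms, assuming the spdk/cpuset.h API:

    #include <stdio.h>
    #include "spdk/cpuset.h"

    int
    main(void)
    {
            struct spdk_cpuset set = {0};

            /* Accepted: a hex mask such as "0xff", or a bracketed core list. */
            if (spdk_cpuset_parse(&set, "[0,2-4]") == 0) {
                    printf("cores: %s\n", spdk_cpuset_fmt(&set));
            }
            /* Rejected, per the errors logged around this point: an unterminated
             * "[", an empty "[]", a doubled dash "[10--11]", a reversed range
             * (11 > 10), leading or trailing commas, core numbers past the
             * build's maximum (1025 here), and masks too wide to convert. */
            return 0;
    }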
00:15:21.802 Run Summary: Type Total Ran Passed Failed Inactive 00:15:21.802 suites 1 1 n/a 0 0 00:15:21.802 tests 4 4 4 0 0 00:15:21.802 asserts 90 90 90 0 n/a 00:15:21.802 00:15:21.802 Elapsed time = 0.000 seconds 00:15:21.802 [2024-07-15 09:41:49.859853] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']' 00:15:21.802 [2024-07-15 09:41:49.859870] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:15:21.802 [2024-07-15 09:41:49.859884] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 237:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:15:21.802 [2024-07-15 09:41:49.859896] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:15:21.802 [2024-07-15 09:41:49.859908] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:15:21.802 [2024-07-15 09:41:49.859920] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 220:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:15:21.802 [2024-07-15 09:41:49.859932] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 215:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:15:21.802 09:41:49 unittest.unittest_util -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:15:21.802 00:15:21.802 00:15:21.802 CUnit - A unit testing framework for C - Version 2.1-3 00:15:21.802 http://cunit.sourceforge.net/ 00:15:21.802 00:15:21.802 00:15:21.802 Suite: crc16 00:15:21.802 Test: test_crc16_t10dif ...passed 00:15:21.802 Test: test_crc16_t10dif_seed ...passed 00:15:21.802 Test: test_crc16_t10dif_copy ...passed 00:15:21.802 00:15:21.802 Run Summary: Type Total Ran Passed Failed Inactive 00:15:21.802 suites 1 1 n/a 0 0 00:15:21.802 tests 3 3 3 0 0 00:15:21.802 asserts 5 5 5 0 n/a 00:15:21.802 00:15:21.802 Elapsed time = 0.000 seconds 00:15:21.802 09:41:49 unittest.unittest_util -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:15:21.802 00:15:21.802 00:15:21.802 CUnit - A unit testing framework for C - Version 2.1-3 00:15:21.802 http://cunit.sourceforge.net/ 00:15:21.802 00:15:21.802 00:15:21.802 Suite: crc32_ieee 00:15:21.802 Test: test_crc32_ieee ...passed 00:15:21.802 00:15:21.802 Run Summary: Type Total Ran Passed Failed Inactive 00:15:21.802 suites 1 1 n/a 0 0 00:15:21.802 tests 1 1 1 0 0 00:15:21.802 asserts 1 1 1 0 n/a 00:15:21.802 00:15:21.802 Elapsed time = 0.000 seconds 00:15:21.802 09:41:49 unittest.unittest_util -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:15:21.802 00:15:21.802 00:15:21.802 CUnit - A unit testing framework for C - Version 2.1-3 00:15:21.802 http://cunit.sourceforge.net/ 00:15:21.802 00:15:21.802 00:15:21.802 Suite: crc32c 00:15:21.802 Test: test_crc32c ...passed 00:15:21.802 Test: test_crc32c_nvme ...passed 00:15:21.802 00:15:21.802 Run Summary: Type Total Ran Passed Failed Inactive 00:15:21.802 suites 1 1 n/a 0 0 00:15:21.802 tests 2 2 2 0 0 00:15:21.802 asserts 16 16 16 0 n/a 00:15:21.802 00:15:21.802 Elapsed time = 0.000 seconds 00:15:21.802 09:41:49 unittest.unittest_util -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:15:21.802 00:15:21.802 00:15:21.802 CUnit - A unit testing framework 
for C - Version 2.1-3 00:15:21.802 http://cunit.sourceforge.net/ 00:15:21.802 00:15:21.802 00:15:21.802 Suite: crc64 00:15:21.802 Test: test_crc64_nvme ...passed 00:15:21.802 00:15:21.802 Run Summary: Type Total Ran Passed Failed Inactive 00:15:21.802 suites 1 1 n/a 0 0 00:15:21.802 tests 1 1 1 0 0 00:15:21.802 asserts 4 4 4 0 n/a 00:15:21.802 00:15:21.802 Elapsed time = 0.000 seconds 00:15:21.802 09:41:49 unittest.unittest_util -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:15:22.065 00:15:22.065 00:15:22.065 CUnit - A unit testing framework for C - Version 2.1-3 00:15:22.065 http://cunit.sourceforge.net/ 00:15:22.065 00:15:22.065 00:15:22.065 Suite: string 00:15:22.065 Test: test_parse_ip_addr ...passed 00:15:22.065 Test: test_str_chomp ...passed 00:15:22.065 Test: test_parse_capacity ...passed 00:15:22.065 Test: test_sprintf_append_realloc ...passed 00:15:22.065 Test: test_strtol ...passed 00:15:22.065 Test: test_strtoll ...passed 00:15:22.065 Test: test_strarray ...passed 00:15:22.065 Test: test_strcpy_replace ...passed 00:15:22.065 00:15:22.065 Run Summary: Type Total Ran Passed Failed Inactive 00:15:22.065 suites 1 1 n/a 0 0 00:15:22.065 tests 8 8 8 0 0 00:15:22.065 asserts 161 161 161 0 n/a 00:15:22.065 00:15:22.065 Elapsed time = 0.000 seconds 00:15:22.065 09:41:49 unittest.unittest_util -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:15:22.065 00:15:22.065 00:15:22.065 CUnit - A unit testing framework for C - Version 2.1-3 00:15:22.065 http://cunit.sourceforge.net/ 00:15:22.065 00:15:22.065 00:15:22.065 Suite: dif 00:15:22.065 Test: dif_generate_and_verify_test ...[2024-07-15 09:41:49.903677] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:15:22.065 [2024-07-15 09:41:49.904226] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:15:22.065 [2024-07-15 09:41:49.904332] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:15:22.065 [2024-07-15 09:41:49.904433] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:15:22.065 [2024-07-15 09:41:49.904552] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:15:22.065 passed 00:15:22.065 Test: dif_disable_check_test ...[2024-07-15 09:41:49.904649] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:15:22.065 [2024-07-15 09:41:49.904915] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:15:22.065 [2024-07-15 09:41:49.904994] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:15:22.065 [2024-07-15 09:41:49.905080] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:15:22.065 passed 00:15:22.065 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-07-15 09:41:49.905301] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, 
Actual=b9848de 00:15:22.065 [2024-07-15 09:41:49.905368] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:15:22.065 [2024-07-15 09:41:49.905435] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:15:22.065 [2024-07-15 09:41:49.905501] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:15:22.065 [2024-07-15 09:41:49.905566] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:15:22.065 [2024-07-15 09:41:49.905632] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:15:22.065 [2024-07-15 09:41:49.905696] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:15:22.065 [2024-07-15 09:41:49.905764] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:15:22.065 [2024-07-15 09:41:49.905869] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:15:22.065 passed 00:15:22.065 Test: dif_apptag_mask_test ...[2024-07-15 09:41:49.905949] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:15:22.065 [2024-07-15 09:41:49.906049] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:15:22.065 passed 00:15:22.065 Test: dif_sec_512_md_0_error_test ...passed 00:15:22.065 Test: dif_sec_4096_md_0_error_test ...passed 00:15:22.066 Test: dif_sec_4100_md_128_error_test ...[2024-07-15 09:41:49.906122] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:15:22.066 [2024-07-15 09:41:49.906190] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:15:22.066 [2024-07-15 09:41:49.906236] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:15:22.066 [2024-07-15 09:41:49.906256] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:15:22.066 [2024-07-15 09:41:49.906282] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
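Every "Failed to compare Guard/App Tag/Ref Tag" line in this dif suite refers to one of the three fields of the 8-byte T10 protection-information tuple carried in each block's metadata; the values are hex, so "LBA=23, Expected=17" above is the Ref Tag matching the LBA (0x17 == 23). An illustrative declaration of what is being compared (field names mirror the messages, not SPDK's internal structs):

    #include <stdint.h>

    struct t10_dif_tuple {
            uint16_t guard;     /* CRC16-T10DIF of the block data ("Guard"),
                                 * the same polynomial the crc16 suite above
                                 * exercises via test_crc16_t10dif           */
            uint16_t app_tag;   /* application-defined tag ("App Tag")       */
            uint32_t ref_tag;   /* reference tag, normally the low 32 bits
                                 * of the LBA ("Ref Tag")                    */
    };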
00:15:22.066 [2024-07-15 09:41:49.906307] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:15:22.066 [2024-07-15 09:41:49.906323] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:15:22.066 passed 00:15:22.066 Test: dif_guard_seed_test ...passed 00:15:22.066 Test: dif_guard_value_test ...passed 00:15:22.066 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:15:22.066 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:15:22.066 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:15:22.066 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:15:22.066 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:15:22.066 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:15:22.066 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:15:22.066 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:15:22.066 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:15:22.066 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:15:22.066 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:15:22.066 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:15:22.066 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:15:22.066 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:15:22.066 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:15:22.066 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:15:22.066 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:15:22.066 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:15:22.066 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-15 09:41:49.915489] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=fd48, Actual=fd4c 00:15:22.066 [2024-07-15 09:41:49.915868] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=fe25, Actual=fe21 00:15:22.066 [2024-07-15 09:41:49.916231] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=8c 00:15:22.066 [2024-07-15 09:41:49.916578] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=8c 00:15:22.066 [2024-07-15 09:41:49.916924] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=5f 00:15:22.066 [2024-07-15 09:41:49.917282] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=5f 00:15:22.066 [2024-07-15 09:41:49.917628] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=fd4c, Actual=7a00 00:15:22.066 [2024-07-15 09:41:49.917879] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=fe21, Actual=76c3 00:15:22.066 [2024-07-15 09:41:49.918127] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, 
Expected=1ab753e9, Actual=1ab753ed 00:15:22.066 [2024-07-15 09:41:49.918493] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=38574664, Actual=38574660 00:15:22.066 [2024-07-15 09:41:49.918836] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=8c 00:15:22.066 [2024-07-15 09:41:49.919199] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=8c 00:15:22.066 [2024-07-15 09:41:49.919559] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=40000005b 00:15:22.066 [2024-07-15 09:41:49.919931] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=40000005b 00:15:22.066 [2024-07-15 09:41:49.920279] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=1ab753ed, Actual=3acdf82b 00:15:22.066 [2024-07-15 09:41:49.920531] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=38574660, Actual=a5804096 00:15:22.066 [2024-07-15 09:41:49.920780] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a576a7768ecc20d3, Actual=a576a7728ecc20d3 00:15:22.066 [2024-07-15 09:41:49.921128] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=88010a294837a266, Actual=88010a2d4837a266 00:15:22.066 [2024-07-15 09:41:49.921477] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=8c 00:15:22.066 [2024-07-15 09:41:49.921827] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=8c 00:15:22.066 [2024-07-15 09:41:49.922174] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=5f 00:15:22.066 [2024-07-15 09:41:49.922540] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=5f 00:15:22.066 [2024-07-15 09:41:49.922890] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a576a7728ecc20d3, Actual=ac18db666d16796d 00:15:22.066 [2024-07-15 09:41:49.923151] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=88010a2d4837a266, Actual=b44dc39ee99021c9 00:15:22.066 passed 00:15:22.066 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-07-15 09:41:49.923261] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd48, Actual=fd4c 00:15:22.066 [2024-07-15 09:41:49.923311] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe25, Actual=fe21 00:15:22.066 [2024-07-15 09:41:49.923358] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:15:22.066 [2024-07-15 09:41:49.923419] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, 
Actual=8c 00:15:22.066 [2024-07-15 09:41:49.923466] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:15:22.066 [2024-07-15 09:41:49.923513] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:15:22.066 [2024-07-15 09:41:49.923560] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=7a00 00:15:22.066 [2024-07-15 09:41:49.923606] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=76c3 00:15:22.066 [2024-07-15 09:41:49.923652] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753e9, Actual=1ab753ed 00:15:22.066 [2024-07-15 09:41:49.923699] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574664, Actual=38574660 00:15:22.066 [2024-07-15 09:41:49.923764] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:15:22.066 [2024-07-15 09:41:49.923812] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:15:22.066 [2024-07-15 09:41:49.923859] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000058 00:15:22.066 [2024-07-15 09:41:49.923906] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000058 00:15:22.066 [2024-07-15 09:41:49.923953] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=3acdf82b 00:15:22.066 [2024-07-15 09:41:49.923998] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=a5804096 00:15:22.066 [2024-07-15 09:41:49.924044] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7768ecc20d3, Actual=a576a7728ecc20d3 00:15:22.066 [2024-07-15 09:41:49.924110] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a294837a266, Actual=88010a2d4837a266 00:15:22.066 [2024-07-15 09:41:49.924157] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:15:22.066 [2024-07-15 09:41:49.924203] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:15:22.066 [2024-07-15 09:41:49.924249] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:15:22.066 [2024-07-15 09:41:49.924296] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:15:22.066 [2024-07-15 09:41:49.924342] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=ac18db666d16796d 00:15:22.066 [2024-07-15 09:41:49.924388] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to 
compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=b44dc39ee99021c9 00:15:22.066 passed 00:15:22.066 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-07-15 09:41:49.924440] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd48, Actual=fd4c 00:15:22.066 [2024-07-15 09:41:49.924487] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe25, Actual=fe21 00:15:22.066 [2024-07-15 09:41:49.924534] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:15:22.066 [2024-07-15 09:41:49.924581] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:15:22.066 [2024-07-15 09:41:49.924628] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:15:22.066 [2024-07-15 09:41:49.924675] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:15:22.066 [2024-07-15 09:41:49.924722] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=7a00 00:15:22.066 [2024-07-15 09:41:49.924767] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=76c3 00:15:22.066 [2024-07-15 09:41:49.924813] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753e9, Actual=1ab753ed 00:15:22.066 [2024-07-15 09:41:49.924859] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574664, Actual=38574660 00:15:22.066 [2024-07-15 09:41:49.924906] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:15:22.066 [2024-07-15 09:41:49.924953] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:15:22.066 [2024-07-15 09:41:49.925000] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000058 00:15:22.066 [2024-07-15 09:41:49.925047] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000058 00:15:22.066 [2024-07-15 09:41:49.925093] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=3acdf82b 00:15:22.066 [2024-07-15 09:41:49.925139] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=a5804096 00:15:22.067 [2024-07-15 09:41:49.925185] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7768ecc20d3, Actual=a576a7728ecc20d3 00:15:22.067 [2024-07-15 09:41:49.925238] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a294837a266, Actual=88010a2d4837a266 00:15:22.067 [2024-07-15 09:41:49.925271] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 
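The long Expected/Actual runs in these inject tests read as planted corruptions: most pairs differ by a single bit (fd48 vs fd4c), which marks the injected error, while wholesale Guard mismatches (fd4c vs 7a00) look like the CRC recomputed over the corrupted data. XORing a pair recovers the planted bit; a throwaway helper for reading the log:

    #include <stdint.h>
    #include <stdio.h>

    /* Feed an Expected/Actual pair from the log to see which bits changed. */
    static void
    show_flipped_bits(uint64_t expected, uint64_t actual)
    {
            printf("flipped: 0x%jx\n", (uintmax_t)(expected ^ actual));
    }

    /* show_flipped_bits(0xfd48, 0xfd4c)    -> 0x4 (bit 2)            */
    /* show_flipped_bits(0x58, 0x400000058) -> 0x400000000 (bit 34)   */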
00:15:22.067 [2024-07-15 09:41:49.925302] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:15:22.067 [2024-07-15 09:41:49.925333] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:15:22.067 [2024-07-15 09:41:49.925364] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:15:22.067 passed 00:15:22.067 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-07-15 09:41:49.925395] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=ac18db666d16796d 00:15:22.067 [2024-07-15 09:41:49.925426] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=b44dc39ee99021c9 00:15:22.067 [2024-07-15 09:41:49.925459] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd48, Actual=fd4c 00:15:22.067 [2024-07-15 09:41:49.925490] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe25, Actual=fe21 00:15:22.067 [2024-07-15 09:41:49.925521] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:15:22.067 [2024-07-15 09:41:49.925553] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:15:22.067 [2024-07-15 09:41:49.925584] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:15:22.067 [2024-07-15 09:41:49.925616] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:15:22.067 [2024-07-15 09:41:49.925647] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=7a00 00:15:22.067 [2024-07-15 09:41:49.925678] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=76c3 00:15:22.067 [2024-07-15 09:41:49.925709] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753e9, Actual=1ab753ed 00:15:22.067 [2024-07-15 09:41:49.925741] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574664, Actual=38574660 00:15:22.067 [2024-07-15 09:41:49.925772] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:15:22.067 [2024-07-15 09:41:49.925804] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:15:22.067 [2024-07-15 09:41:49.925835] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000058 00:15:22.067 [2024-07-15 09:41:49.925867] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000058 00:15:22.067 [2024-07-15 09:41:49.925899] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=3acdf82b 00:15:22.067 [2024-07-15 09:41:49.925929] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=a5804096 00:15:22.067 [2024-07-15 09:41:49.925960] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7768ecc20d3, Actual=a576a7728ecc20d3 00:15:22.067 [2024-07-15 09:41:49.925992] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a294837a266, Actual=88010a2d4837a266 00:15:22.067 [2024-07-15 09:41:49.926024] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:15:22.067 [2024-07-15 09:41:49.926055] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:15:22.067 [2024-07-15 09:41:49.926086] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:15:22.067 [2024-07-15 09:41:49.926117] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:15:22.067 passed 00:15:22.067 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-07-15 09:41:49.926148] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=ac18db666d16796d 00:15:22.067 [2024-07-15 09:41:49.926179] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=b44dc39ee99021c9 00:15:22.067 [2024-07-15 09:41:49.926212] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd48, Actual=fd4c 00:15:22.067 [2024-07-15 09:41:49.926243] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe25, Actual=fe21 00:15:22.067 [2024-07-15 09:41:49.926274] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:15:22.067 passed 00:15:22.067 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-07-15 09:41:49.926305] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:15:22.067 [2024-07-15 09:41:49.926336] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:15:22.067 [2024-07-15 09:41:49.926367] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:15:22.067 [2024-07-15 09:41:49.926398] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=7a00 00:15:22.067 [2024-07-15 09:41:49.926428] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=76c3 00:15:22.067 [2024-07-15 09:41:49.926467] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753e9, 
Actual=1ab753ed 00:15:22.067 [2024-07-15 09:41:49.926498] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574664, Actual=38574660 00:15:22.067 [2024-07-15 09:41:49.926529] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:15:22.067 [2024-07-15 09:41:49.926560] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:15:22.067 [2024-07-15 09:41:49.926591] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000058 00:15:22.067 [2024-07-15 09:41:49.926622] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000058 00:15:22.067 [2024-07-15 09:41:49.926653] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=3acdf82b 00:15:22.067 [2024-07-15 09:41:49.926683] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=a5804096 00:15:22.067 [2024-07-15 09:41:49.926714] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7768ecc20d3, Actual=a576a7728ecc20d3 00:15:22.067 [2024-07-15 09:41:49.926749] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a294837a266, Actual=88010a2d4837a266 00:15:22.067 [2024-07-15 09:41:49.926802] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:15:22.067 [2024-07-15 09:41:49.926834] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:15:22.067 [2024-07-15 09:41:49.926876] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:15:22.067 [2024-07-15 09:41:49.926920] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:15:22.067 [2024-07-15 09:41:49.926952] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=ac18db666d16796d 00:15:22.067 [2024-07-15 09:41:49.926983] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=b44dc39ee99021c9 00:15:22.067 passed 00:15:22.067 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-07-15 09:41:49.927016] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd48, Actual=fd4c 00:15:22.067 [2024-07-15 09:41:49.927054] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe25, Actual=fe21 00:15:22.067 [2024-07-15 09:41:49.927086] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:15:22.067 [2024-07-15 09:41:49.927117] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:15:22.067 
[2024-07-15 09:41:49.927149] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:15:22.067 [2024-07-15 09:41:49.927181] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:15:22.067 [2024-07-15 09:41:49.927213] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=7a00 00:15:22.067 [2024-07-15 09:41:49.927244] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=76c3 00:15:22.067 passed 00:15:22.067 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-07-15 09:41:49.927277] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753e9, Actual=1ab753ed 00:15:22.067 [2024-07-15 09:41:49.927309] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574664, Actual=38574660 00:15:22.067 [2024-07-15 09:41:49.927340] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:15:22.067 [2024-07-15 09:41:49.927371] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:15:22.067 [2024-07-15 09:41:49.927402] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000058 00:15:22.067 [2024-07-15 09:41:49.927433] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000058 00:15:22.067 [2024-07-15 09:41:49.927464] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=3acdf82b 00:15:22.067 [2024-07-15 09:41:49.927498] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=a5804096 00:15:22.067 [2024-07-15 09:41:49.927529] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7768ecc20d3, Actual=a576a7728ecc20d3 00:15:22.067 [2024-07-15 09:41:49.927561] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a294837a266, Actual=88010a2d4837a266 00:15:22.067 [2024-07-15 09:41:49.927592] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:15:22.067 [2024-07-15 09:41:49.927624] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:15:22.067 [2024-07-15 09:41:49.927655] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:15:22.067 [2024-07-15 09:41:49.927686] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:15:22.067 [2024-07-15 09:41:49.927736] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=ac18db666d16796d 00:15:22.067 [2024-07-15 09:41:49.927768] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=b44dc39ee99021c9 00:15:22.067 passed 00:15:22.067 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:15:22.068 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:15:22.068 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:15:22.068 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:15:22.068 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:15:22.068 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:15:22.068 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:15:22.068 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:15:22.068 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:15:22.068 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-15 09:41:49.931974] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=fd48, Actual=fd4c 00:15:22.068 [2024-07-15 09:41:49.932105] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=f2e, Actual=f2a 00:15:22.068 [2024-07-15 09:41:49.932228] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=8c 00:15:22.068 [2024-07-15 09:41:49.932349] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=8c 00:15:22.068 [2024-07-15 09:41:49.932468] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=5f 00:15:22.068 [2024-07-15 09:41:49.932590] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=5f 00:15:22.068 [2024-07-15 09:41:49.932710] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=fd4c, Actual=7a00 00:15:22.068 [2024-07-15 09:41:49.932833] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=66db, Actual=ee39 00:15:22.068 [2024-07-15 09:41:49.932954] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=1ab753e9, Actual=1ab753ed 00:15:22.068 [2024-07-15 09:41:49.933075] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=8caa56c2, Actual=8caa56c6 00:15:22.068 [2024-07-15 09:41:49.933195] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=8c 00:15:22.068 [2024-07-15 09:41:49.933316] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=8c 00:15:22.068 [2024-07-15 09:41:49.933438] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=40000005b 00:15:22.068 [2024-07-15 09:41:49.933560] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=40000005b 00:15:22.068 [2024-07-15 09:41:49.933682] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to 
compare Guard: LBA=91, Expected=1ab753ed, Actual=3acdf82b 00:15:22.068 [2024-07-15 09:41:49.933805] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=798b71ef, Actual=e45c7719 00:15:22.068 [2024-07-15 09:41:49.933926] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a576a7768ecc20d3, Actual=a576a7728ecc20d3 00:15:22.068 [2024-07-15 09:41:49.934049] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=702699b4a6e9b8ad, Actual=702699b0a6e9b8ad 00:15:22.068 [2024-07-15 09:41:49.934170] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=8c 00:15:22.068 [2024-07-15 09:41:49.934292] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=8c 00:15:22.068 [2024-07-15 09:41:49.934415] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=5f 00:15:22.068 [2024-07-15 09:41:49.934538] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=5f 00:15:22.068 [2024-07-15 09:41:49.934660] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a576a7728ecc20d3, Actual=ac18db666d16796d 00:15:22.068 passed 00:15:22.068 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-15 09:41:49.934783] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a8b5748b76783ef5, Actual=94f9bd38d7dfbd5a 00:15:22.068 [2024-07-15 09:41:49.934820] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd48, Actual=fd4c 00:15:22.068 [2024-07-15 09:41:49.934853] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fb35, Actual=fb31 00:15:22.068 [2024-07-15 09:41:49.934885] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:15:22.068 [2024-07-15 09:41:49.934918] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:15:22.068 [2024-07-15 09:41:49.934950] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=5d 00:15:22.068 [2024-07-15 09:41:49.934982] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=5d 00:15:22.068 [2024-07-15 09:41:49.935015] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=7a00 00:15:22.068 [2024-07-15 09:41:49.935054] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=1a22 00:15:22.068 [2024-07-15 09:41:49.935084] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753e9, Actual=1ab753ed 00:15:22.068 [2024-07-15 09:41:49.935112] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=6e9c7740, 
Actual=6e9c7744 00:15:22.068 [2024-07-15 09:41:49.935139] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:15:22.068 [2024-07-15 09:41:49.935166] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:15:22.068 [2024-07-15 09:41:49.935194] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=400000059 00:15:22.068 [2024-07-15 09:41:49.935222] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=400000059 00:15:22.068 [2024-07-15 09:41:49.935249] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=3acdf82b 00:15:22.068 [2024-07-15 09:41:49.935277] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=66a569b 00:15:22.068 [2024-07-15 09:41:49.935304] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7768ecc20d3, Actual=a576a7728ecc20d3 00:15:22.068 [2024-07-15 09:41:49.935333] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=90bb0d5499d5b337, Actual=90bb0d5099d5b337 00:15:22.068 [2024-07-15 09:41:49.935360] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:15:22.068 [2024-07-15 09:41:49.935387] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:15:22.068 passed 00:15:22.068 Test: dix_sec_512_md_0_error ...passed 00:15:22.068 Test: dix_sec_512_md_8_prchk_0_single_iov ...passed 00:15:22.068 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...[2024-07-15 09:41:49.935414] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=5d 00:15:22.068 [2024-07-15 09:41:49.935442] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=5d 00:15:22.068 [2024-07-15 09:41:49.935469] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=ac18db666d16796d 00:15:22.068 [2024-07-15 09:41:49.935498] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=746429d8e8e3b6c0 00:15:22.068 [2024-07-15 09:41:49.935507] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
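The spdk_dif_ctx_init rejections that recur through this suite ("Metadata size is smaller than DIF size.", "Zero block size is not allowed and should be a multiple of 4kB") are the dif_sec_*_error and dix_sec_512_md_0_error cases handing the context deliberately broken geometry. Restated as a sketch; this is an illustrative paraphrase of the two messages, not code lifted from lib/util/dif.c:

    #include <errno.h>
    #include <stdint.h>

    static int
    check_dif_geometry(uint32_t block_size, uint32_t md_size)
    {
            if (md_size < 8) {
                    /* "Metadata size is smaller than DIF size." -- the 8-byte
                     * tuple cannot fit (the *_md_0_error cases pass md_size 0). */
                    return -EINVAL;
            }
            if (block_size == 0 || block_size % 4096 != 0) {
                    /* "Zero block size is not allowed and should be a multiple
                     * of 4kB" -- the 4100-byte block in the
                     * dif_sec_4100_md_128_error_test case above appears to trip
                     * the alignment half of this check.                        */
                    return -EINVAL;
            }
            return 0;
    }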
00:15:22.068 passed 00:15:22.068 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:15:22.068 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:15:22.068 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:15:22.068 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:15:22.068 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:15:22.068 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:15:22.068 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:15:22.068 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-15 09:41:49.938913] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=fd48, Actual=fd4c 00:15:22.068 [2024-07-15 09:41:49.939023] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=f2e, Actual=f2a 00:15:22.068 [2024-07-15 09:41:49.939126] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=8c 00:15:22.068 [2024-07-15 09:41:49.939229] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=8c 00:15:22.068 [2024-07-15 09:41:49.939332] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=5f 00:15:22.068 [2024-07-15 09:41:49.939435] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=5f 00:15:22.068 [2024-07-15 09:41:49.939537] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=fd4c, Actual=7a00 00:15:22.068 [2024-07-15 09:41:49.939640] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=66db, Actual=ee39 00:15:22.068 [2024-07-15 09:41:49.939763] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=1ab753e9, Actual=1ab753ed 00:15:22.068 [2024-07-15 09:41:49.939865] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=8caa56c2, Actual=8caa56c6 00:15:22.068 [2024-07-15 09:41:49.939966] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=8c 00:15:22.068 [2024-07-15 09:41:49.940068] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=8c 00:15:22.068 [2024-07-15 09:41:49.940169] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=40000005b 00:15:22.068 [2024-07-15 09:41:49.940271] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=40000005b 00:15:22.068 [2024-07-15 09:41:49.940372] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=1ab753ed, Actual=3acdf82b 00:15:22.068 [2024-07-15 09:41:49.940474] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=798b71ef, Actual=e45c7719 00:15:22.068 [2024-07-15 09:41:49.940575] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: 
*ERROR*: Failed to compare Guard: LBA=91, Expected=a576a7768ecc20d3, Actual=a576a7728ecc20d3 00:15:22.068 [2024-07-15 09:41:49.940677] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=702699b4a6e9b8ad, Actual=702699b0a6e9b8ad 00:15:22.068 [2024-07-15 09:41:49.940796] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=8c 00:15:22.068 [2024-07-15 09:41:49.940897] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=8c 00:15:22.068 [2024-07-15 09:41:49.940999] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=5f 00:15:22.068 [2024-07-15 09:41:49.941101] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=5f 00:15:22.068 [2024-07-15 09:41:49.941204] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a576a7728ecc20d3, Actual=ac18db666d16796d 00:15:22.068 passed 00:15:22.068 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-15 09:41:49.941306] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a8b5748b76783ef5, Actual=94f9bd38d7dfbd5a 00:15:22.069 [2024-07-15 09:41:49.941338] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd48, Actual=fd4c 00:15:22.069 [2024-07-15 09:41:49.941366] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fb35, Actual=fb31 00:15:22.069 [2024-07-15 09:41:49.941393] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:15:22.069 [2024-07-15 09:41:49.941420] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:15:22.069 [2024-07-15 09:41:49.941447] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=5d 00:15:22.069 [2024-07-15 09:41:49.941474] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=5d 00:15:22.069 [2024-07-15 09:41:49.941502] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=7a00 00:15:22.069 [2024-07-15 09:41:49.941530] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=1a22 00:15:22.069 [2024-07-15 09:41:49.941557] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753e9, Actual=1ab753ed 00:15:22.069 [2024-07-15 09:41:49.941584] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=6e9c7740, Actual=6e9c7744 00:15:22.069 [2024-07-15 09:41:49.941610] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:15:22.069 [2024-07-15 09:41:49.941636] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, 
Actual=8c 00:15:22.069 [2024-07-15 09:41:49.941663] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=400000059 00:15:22.069 [2024-07-15 09:41:49.941690] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=400000059 00:15:22.069 [2024-07-15 09:41:49.941716] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=3acdf82b 00:15:22.069 [2024-07-15 09:41:49.941743] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=66a569b 00:15:22.069 [2024-07-15 09:41:49.941771] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7768ecc20d3, Actual=a576a7728ecc20d3 00:15:22.069 [2024-07-15 09:41:49.941797] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=90bb0d5499d5b337, Actual=90bb0d5099d5b337 00:15:22.069 [2024-07-15 09:41:49.941824] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:15:22.069 [2024-07-15 09:41:49.941851] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:15:22.069 [2024-07-15 09:41:49.941879] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=5d 00:15:22.069 [2024-07-15 09:41:49.941905] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=5d 00:15:22.069 [2024-07-15 09:41:49.941932] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=ac18db666d16796d 00:15:22.069 passed 00:15:22.069 Test: set_md_interleave_iovs_test ...[2024-07-15 09:41:49.941958] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=746429d8e8e3b6c0 00:15:22.069 passed 00:15:22.069 Test: set_md_interleave_iovs_split_test ...passed 00:15:22.069 Test: dif_generate_stream_pi_16_test ...passed 00:15:22.069 Test: dif_generate_stream_test ...passed 00:15:22.069 Test: set_md_interleave_iovs_alignment_test ...passed 00:15:22.069 Test: dif_generate_split_test ...[2024-07-15 09:41:49.942555] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1822:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
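[Editorial note] The dif_sec_*/dix_sec_* cases above deliberately corrupt blocks and assert that the verifier rejects them, which is why every *ERROR* line reports an Expected/Actual pair for the Guard (a CRC over the block data), the App Tag, and the Ref Tag (derived from the LBA). Below is a minimal illustrative sketch of that comparison logic, not SPDK's _dif_verify itself; it assumes the classic 8-byte tuple with a 16-bit guard, while the suite also runs 32- and 64-bit guard variants (visible as the wider Expected/Actual values above).

    #include <stdint.h>
    #include <stdio.h>

    /*
     * Illustrative only: an 8-byte protection tuple as in T10 DIF with a
     * 16-bit guard. Field widths and the CRC polynomial vary with format.
     */
    struct dif_tuple {
        uint16_t guard;    /* CRC computed over the data block */
        uint16_t app_tag;  /* application-defined tag */
        uint32_t ref_tag;  /* reference tag, typically seeded from the LBA */
    };

    static int dif_verify_one(const struct dif_tuple *exp,
                              const struct dif_tuple *act, uint64_t lba)
    {
        if (exp->guard != act->guard) {
            fprintf(stderr, "Failed to compare Guard: LBA=%ju, "
                    "Expected=%x, Actual=%x\n",
                    (uintmax_t)lba, exp->guard, act->guard);
            return -1;
        }
        if (exp->app_tag != act->app_tag) {
            fprintf(stderr, "Failed to compare App Tag: LBA=%ju, "
                    "Expected=%x, Actual=%x\n",
                    (uintmax_t)lba, exp->app_tag, act->app_tag);
            return -1;
        }
        if (exp->ref_tag != act->ref_tag) {
            fprintf(stderr, "Failed to compare Ref Tag: LBA=%ju, "
                    "Expected=%x, Actual=%x\n",
                    (uintmax_t)lba, exp->ref_tag, act->ref_tag);
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        /* Mirrors one injected fault above: guard fd48 expected, fd4c seen. */
        struct dif_tuple exp = { 0xfd48, 0x88, 0x5b };
        struct dif_tuple act = { 0xfd4c, 0x88, 0x5b };
        return dif_verify_one(&exp, &act, 91) ? 1 : 0;
    }

Note how most failing pairs above differ in a single bit (fd48 vs fd4c, 88 vs 8c, 5b vs 5f), consistent with the *_inject_* test names: one bit is flipped per tuple and the verifier is expected to catch it.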
00:15:22.069 passed 00:15:22.069 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:15:22.069 Test: dif_verify_split_test ...passed 00:15:22.069 Test: dif_verify_stream_multi_segments_test ...passed 00:15:22.069 Test: update_crc32c_pi_16_test ...passed 00:15:22.069 Test: update_crc32c_test ...passed 00:15:22.069 Test: dif_update_crc32c_split_test ...passed 00:15:22.069 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:15:22.069 Test: get_range_with_md_test ...passed 00:15:22.069 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:15:22.069 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:15:22.069 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:15:22.069 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:15:22.069 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:15:22.069 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:15:22.069 Test: dif_generate_and_verify_unmap_test ...passed 00:15:22.069 00:15:22.069 Run Summary: Type Total Ran Passed Failed Inactive 00:15:22.069 suites 1 1 n/a 0 0 00:15:22.069 tests 79 79 79 0 0 00:15:22.069 asserts 3584 3584 3584 0 n/a 00:15:22.069 00:15:22.069 Elapsed time = 0.047 seconds 00:15:22.069 09:41:49 unittest.unittest_util -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:15:22.069 00:15:22.069 00:15:22.069 CUnit - A unit testing framework for C - Version 2.1-3 00:15:22.069 http://cunit.sourceforge.net/ 00:15:22.069 00:15:22.069 00:15:22.069 Suite: iov 00:15:22.069 Test: test_single_iov ...passed 00:15:22.069 Test: test_simple_iov ...passed 00:15:22.069 Test: test_complex_iov ...passed 00:15:22.069 Test: test_iovs_to_buf ...passed 00:15:22.069 Test: test_buf_to_iovs ...passed 00:15:22.069 Test: test_memset ...passed 00:15:22.069 Test: test_iov_one ...passed 00:15:22.069 Test: test_iov_xfer ...passed 00:15:22.069 00:15:22.069 Run Summary: Type Total Ran Passed Failed Inactive 00:15:22.069 suites 1 1 n/a 0 0 00:15:22.069 tests 8 8 8 0 0 00:15:22.069 asserts 156 156 156 0 n/a 00:15:22.069 00:15:22.069 Elapsed time = 0.000 seconds 00:15:22.069 09:41:49 unittest.unittest_util -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:15:22.069 00:15:22.069 00:15:22.069 CUnit - A unit testing framework for C - Version 2.1-3 00:15:22.069 http://cunit.sourceforge.net/ 00:15:22.069 00:15:22.069 00:15:22.069 Suite: math 00:15:22.069 Test: test_serial_number_arithmetic ...passed 00:15:22.069 Suite: erase 00:15:22.069 Test: test_memset_s ...passed 00:15:22.069 00:15:22.069 Run Summary: Type Total Ran Passed Failed Inactive 00:15:22.069 suites 2 2 n/a 0 0 00:15:22.069 tests 2 2 2 0 0 00:15:22.069 asserts 18 18 18 0 n/a 00:15:22.069 00:15:22.069 Elapsed time = 0.000 seconds 00:15:22.069 09:41:49 unittest.unittest_util -- unit/unittest.sh@145 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:15:22.069 00:15:22.069 00:15:22.069 CUnit - A unit testing framework for C - Version 2.1-3 00:15:22.069 http://cunit.sourceforge.net/ 00:15:22.069 00:15:22.069 00:15:22.069 Suite: pipe 00:15:22.069 Test: test_create_destroy ...passed 00:15:22.069 Test: test_write_get_buffer ...passed 00:15:22.069 Test: test_write_advance ...passed 00:15:22.069 Test: test_read_get_buffer ...passed 00:15:22.069 Test: test_read_advance ...passed 00:15:22.069 Test: test_data ...passed 00:15:22.069 00:15:22.069 Run 
Summary: Type Total Ran Passed Failed Inactive 00:15:22.069 suites 1 1 n/a 0 0 00:15:22.069 tests 6 6 6 0 0 00:15:22.069 asserts 251 251 251 0 n/a 00:15:22.069 00:15:22.069 Elapsed time = 0.000 seconds 00:15:22.069 09:41:49 unittest.unittest_util -- unit/unittest.sh@146 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:15:22.069 00:15:22.069 00:15:22.069 CUnit - A unit testing framework for C - Version 2.1-3 00:15:22.069 http://cunit.sourceforge.net/ 00:15:22.069 00:15:22.069 00:15:22.069 Suite: xor 00:15:22.069 Test: test_xor_gen ...passed 00:15:22.069 00:15:22.069 Run Summary: Type Total Ran Passed Failed Inactive 00:15:22.069 suites 1 1 n/a 0 0 00:15:22.069 tests 1 1 1 0 0 00:15:22.069 asserts 17 17 17 0 n/a 00:15:22.069 00:15:22.069 Elapsed time = 0.000 seconds 00:15:22.069 00:15:22.069 real 0m0.128s 00:15:22.069 user 0m0.069s 00:15:22.069 sys 0m0.060s 00:15:22.069 09:41:49 unittest.unittest_util -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:22.069 09:41:49 unittest.unittest_util -- common/autotest_common.sh@10 -- # set +x 00:15:22.069 ************************************ 00:15:22.069 END TEST unittest_util 00:15:22.069 ************************************ 00:15:22.069 09:41:50 unittest -- common/autotest_common.sh@1142 -- # return 0 00:15:22.069 09:41:50 unittest -- unit/unittest.sh@284 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:15:22.070 09:41:50 unittest -- unit/unittest.sh@287 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:15:22.070 09:41:50 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:22.070 09:41:50 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:22.070 09:41:50 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:22.070 ************************************ 00:15:22.070 START TEST unittest_dma 00:15:22.070 ************************************ 00:15:22.070 09:41:50 unittest.unittest_dma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:15:22.070 00:15:22.070 00:15:22.070 CUnit - A unit testing framework for C - Version 2.1-3 00:15:22.070 http://cunit.sourceforge.net/ 00:15:22.070 00:15:22.070 00:15:22.070 Suite: dma_suite 00:15:22.070 Test: test_dma ...[2024-07-15 09:41:50.021491] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 56:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:15:22.070 passed 00:15:22.070 00:15:22.070 Run Summary: Type Total Ran Passed Failed Inactive 00:15:22.070 suites 1 1 n/a 0 0 00:15:22.070 tests 1 1 1 0 0 00:15:22.070 asserts 54 54 54 0 n/a 00:15:22.070 00:15:22.070 Elapsed time = 0.000 seconds 00:15:22.070 00:15:22.070 real 0m0.006s 00:15:22.070 user 0m0.005s 00:15:22.070 sys 0m0.007s 00:15:22.070 09:41:50 unittest.unittest_dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:22.070 09:41:50 unittest.unittest_dma -- common/autotest_common.sh@10 -- # set +x 00:15:22.070 ************************************ 00:15:22.070 END TEST unittest_dma 00:15:22.070 ************************************ 00:15:22.070 09:41:50 unittest -- common/autotest_common.sh@1142 -- # return 0 00:15:22.070 09:41:50 unittest -- unit/unittest.sh@289 -- # run_test unittest_init unittest_init 00:15:22.070 09:41:50 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:22.070 09:41:50 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:22.070 09:41:50 unittest -- common/autotest_common.sh@10 -- # set +x 
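[Editorial note] The iov suite above (test_iovs_to_buf, test_buf_to_iovs, test_iov_xfer, ...) exercises scatter-gather helpers that copy between an iovec array and a flat buffer. A minimal sketch of the gather direction follows; the function name and the clamping behavior are illustrative assumptions, not the SPDK helpers' actual signatures.

    #include <string.h>
    #include <sys/uio.h>
    #include <stddef.h>

    /* Gather a scatter list into one contiguous buffer; returns bytes copied. */
    static size_t iovs_to_buf(const struct iovec *iovs, int iovcnt,
                              void *buf, size_t buflen)
    {
        size_t off = 0;

        for (int i = 0; i < iovcnt; i++) {
            size_t n = iovs[i].iov_len;

            if (n > buflen - off)
                n = buflen - off;  /* clamp instead of overflowing */
            memcpy((char *)buf + off, iovs[i].iov_base, n);
            off += n;
            if (off == buflen)
                break;
        }
        return off;
    }

    int main(void)
    {
        char a[] = "scatter", b[] = "-gather", out[16] = { 0 };
        struct iovec v[2] = { { a, 7 }, { b, 7 } };

        return iovs_to_buf(v, 2, out, sizeof(out)) == 14 ? 0 : 1;
    }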
00:15:22.070 ************************************ 00:15:22.070 START TEST unittest_init 00:15:22.070 ************************************ 00:15:22.070 09:41:50 unittest.unittest_init -- common/autotest_common.sh@1123 -- # unittest_init 00:15:22.070 09:41:50 unittest.unittest_init -- unit/unittest.sh@150 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:15:22.070 00:15:22.070 00:15:22.070 CUnit - A unit testing framework for C - Version 2.1-3 00:15:22.070 http://cunit.sourceforge.net/ 00:15:22.070 00:15:22.070 00:15:22.070 Suite: subsystem_suite 00:15:22.070 Test: subsystem_sort_test_depends_on_single ...passed 00:15:22.070 Test: subsystem_sort_test_depends_on_multiple ...passed 00:15:22.070 Test: subsystem_sort_test_missing_dependency ...[2024-07-15 09:41:50.068546] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 197:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:15:22.070 passed 00:15:22.070 00:15:22.070 Run Summary: Type Total Ran Passed Failed Inactive 00:15:22.070 suites 1 1 n/a 0 0 00:15:22.070 tests 3 3 3 0 0 00:15:22.070 asserts 20 20 20 0 n/a 00:15:22.070 00:15:22.070 Elapsed time = 0.000 seconds 00:15:22.070 [2024-07-15 09:41:50.068846] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 191:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:15:22.070 00:15:22.070 real 0m0.006s 00:15:22.070 user 0m0.000s 00:15:22.070 sys 0m0.008s 00:15:22.070 09:41:50 unittest.unittest_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:22.070 ************************************ 00:15:22.070 END TEST unittest_init 00:15:22.070 ************************************ 00:15:22.070 09:41:50 unittest.unittest_init -- common/autotest_common.sh@10 -- # set +x 00:15:22.070 09:41:50 unittest -- common/autotest_common.sh@1142 -- # return 0 00:15:22.070 09:41:50 unittest -- unit/unittest.sh@290 -- # run_test unittest_keyring /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:15:22.070 09:41:50 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:22.070 09:41:50 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:22.070 09:41:50 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:22.070 ************************************ 00:15:22.070 START TEST unittest_keyring 00:15:22.070 ************************************ 00:15:22.070 09:41:50 unittest.unittest_keyring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:15:22.070 00:15:22.070 00:15:22.070 CUnit - A unit testing framework for C - Version 2.1-3 00:15:22.070 http://cunit.sourceforge.net/ 00:15:22.070 00:15:22.070 00:15:22.070 Suite: keyring 00:15:22.070 Test: test_keyring_add_remove ...[2024-07-15 09:41:50.116357] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key 'key0' already exists 00:15:22.070 [2024-07-15 09:41:50.116606] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key ':key0' already exists 00:15:22.070 passed 00:15:22.070 Test: test_keyring_get_put ...[2024-07-15 09:41:50.116629] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:22.070 passed 00:15:22.070 00:15:22.070 Run Summary: Type Total Ran Passed Failed Inactive 00:15:22.070 suites 1 1 n/a 0 0 00:15:22.070 tests 2 2 2 0 0 00:15:22.070 asserts 44 44 44 0 n/a 00:15:22.070 00:15:22.070 Elapsed time = 0.000 seconds 00:15:22.070 
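[Editorial note] The subsystem_suite a few entries above is another expected-failure check: subsystem_sort_test_missing_dependency registers a subsystem A that depends on an unregistered B, and spdk_subsystem_init must refuse rather than start in a broken order (the keyring suite's "Key 'key0' already exists" errors are the same pattern for duplicate keys). The underlying idea is validating the depends-on graph before topologically sorting it; a hedged sketch of the validation step, with names and the single-dependency restriction as illustrative simplifications, not SPDK's structures:

    #include <stdio.h>
    #include <string.h>

    struct subsystem {
        const char *name;
        const char *depends_on;  /* one dependency, for simplicity */
    };

    /* Return the index of a registered subsystem, or -1 if missing. */
    static int find_sub(const struct subsystem *s, int n, const char *name)
    {
        for (int i = 0; i < n; i++)
            if (strcmp(s[i].name, name) == 0)
                return i;
        return -1;
    }

    /* Verify every declared dependency is registered before sorting. */
    static int check_deps(const struct subsystem *s, int n)
    {
        for (int i = 0; i < n; i++) {
            if (s[i].depends_on && find_sub(s, n, s[i].depends_on) < 0) {
                fprintf(stderr, "subsystem %s dependency %s is missing\n",
                        s[i].name, s[i].depends_on);
                return -1;
            }
        }
        return 0;
    }

    int main(void)
    {
        struct subsystem subs[] = { { "A", "B" } };  /* B never registered */
        return check_deps(subs, 1) ? 1 : 0;
    }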
00:15:22.070 real 0m0.006s 00:15:22.070 user 0m0.005s 00:15:22.070 sys 0m0.005s 00:15:22.070 09:41:50 unittest.unittest_keyring -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:22.070 09:41:50 unittest.unittest_keyring -- common/autotest_common.sh@10 -- # set +x 00:15:22.070 ************************************ 00:15:22.070 END TEST unittest_keyring 00:15:22.070 ************************************ 00:15:22.070 09:41:50 unittest -- common/autotest_common.sh@1142 -- # return 0 00:15:22.070 09:41:50 unittest -- unit/unittest.sh@292 -- # '[' no = yes ']' 00:15:22.070 00:15:22.070 00:15:22.070 ===================== 00:15:22.070 All unit tests passed 00:15:22.070 ===================== 00:15:22.070 WARN: lcov not installed or SPDK built without coverage! 00:15:22.070 09:41:50 unittest -- unit/unittest.sh@305 -- # set +x 00:15:22.070 WARN: neither valgrind nor ASAN is enabled! 00:15:22.070 00:15:22.070 00:15:22.070 00:15:22.070 real 0m23.785s 00:15:22.070 user 0m21.016s 00:15:22.070 sys 0m1.661s 00:15:22.070 09:41:50 unittest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:22.070 09:41:50 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:22.070 ************************************ 00:15:22.070 END TEST unittest 00:15:22.070 ************************************ 00:15:22.329 09:41:50 -- common/autotest_common.sh@1142 -- # return 0 00:15:22.329 09:41:50 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:15:22.329 09:41:50 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:15:22.329 09:41:50 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:15:22.329 09:41:50 -- spdk/autotest.sh@162 -- # timing_enter lib 00:15:22.329 09:41:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:22.329 09:41:50 -- common/autotest_common.sh@10 -- # set +x 00:15:22.329 09:41:50 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:15:22.329 09:41:50 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:15:22.329 09:41:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:22.329 09:41:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:22.329 09:41:50 -- common/autotest_common.sh@10 -- # set +x 00:15:22.329 ************************************ 00:15:22.329 START TEST env 00:15:22.329 ************************************ 00:15:22.329 09:41:50 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:15:22.329 * Looking for test storage... 
00:15:22.329 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:15:22.329 09:41:50 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:15:22.329 09:41:50 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:22.329 09:41:50 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:22.329 09:41:50 env -- common/autotest_common.sh@10 -- # set +x 00:15:22.329 ************************************ 00:15:22.329 START TEST env_memory 00:15:22.329 ************************************ 00:15:22.329 09:41:50 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:15:22.329 00:15:22.329 00:15:22.329 CUnit - A unit testing framework for C - Version 2.1-3 00:15:22.329 http://cunit.sourceforge.net/ 00:15:22.329 00:15:22.329 00:15:22.329 Suite: memory 00:15:22.329 Test: alloc and free memory map ...[2024-07-15 09:41:50.403108] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:15:22.329 passed 00:15:22.329 Test: mem map translation ...[2024-07-15 09:41:50.413373] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:15:22.329 [2024-07-15 09:41:50.413478] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:15:22.329 [2024-07-15 09:41:50.413515] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:15:22.329 [2024-07-15 09:41:50.413529] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:15:22.587 passed 00:15:22.587 Test: mem map registration ...[2024-07-15 09:41:50.425470] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:15:22.587 [2024-07-15 09:41:50.425548] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:15:22.587 passed 00:15:22.587 Test: mem map adjacent registrations ...passed 00:15:22.587 00:15:22.587 Run Summary: Type Total Ran Passed Failed Inactive 00:15:22.587 suites 1 1 n/a 0 0 00:15:22.587 tests 4 4 4 0 0 00:15:22.587 asserts 152 152 152 0 n/a 00:15:22.587 00:15:22.587 Elapsed time = 0.047 seconds 00:15:22.587 00:15:22.587 real 0m0.054s 00:15:22.587 user 0m0.043s 00:15:22.587 sys 0m0.011s 00:15:22.587 09:41:50 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:22.587 09:41:50 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:15:22.587 ************************************ 00:15:22.587 END TEST env_memory 00:15:22.587 ************************************ 00:15:22.588 09:41:50 env -- common/autotest_common.sh@1142 -- # return 0 00:15:22.588 09:41:50 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:15:22.588 09:41:50 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:22.588 09:41:50 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:22.588 09:41:50 env -- common/autotest_common.sh@10 -- # set +x 00:15:22.588 ************************************ 00:15:22.588 START TEST env_vtophys 
00:15:22.588 ************************************ 00:15:22.588 09:41:50 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:15:22.588 EAL: lib.eal log level changed from notice to debug 00:15:22.588 EAL: Sysctl reports 10 cpus 00:15:22.588 EAL: Detected lcore 0 as core 0 on socket 0 00:15:22.588 EAL: Detected lcore 1 as core 0 on socket 0 00:15:22.588 EAL: Detected lcore 2 as core 0 on socket 0 00:15:22.588 EAL: Detected lcore 3 as core 0 on socket 0 00:15:22.588 EAL: Detected lcore 4 as core 0 on socket 0 00:15:22.588 EAL: Detected lcore 5 as core 0 on socket 0 00:15:22.588 EAL: Detected lcore 6 as core 0 on socket 0 00:15:22.588 EAL: Detected lcore 7 as core 0 on socket 0 00:15:22.588 EAL: Detected lcore 8 as core 0 on socket 0 00:15:22.588 EAL: Detected lcore 9 as core 0 on socket 0 00:15:22.588 EAL: Maximum logical cores by configuration: 128 00:15:22.588 EAL: Detected CPU lcores: 10 00:15:22.588 EAL: Detected NUMA nodes: 1 00:15:22.588 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:15:22.588 EAL: Checking presence of .so 'librte_eal.so.24' 00:15:22.588 EAL: Checking presence of .so 'librte_eal.so' 00:15:22.588 EAL: Detected static linkage of DPDK 00:15:22.588 EAL: No shared files mode enabled, IPC will be disabled 00:15:22.588 EAL: PCI scan found 10 devices 00:15:22.588 EAL: Specific IOVA mode is not requested, autodetecting 00:15:22.588 EAL: Selecting IOVA mode according to bus requests 00:15:22.588 EAL: Bus pci wants IOVA as 'PA' 00:15:22.588 EAL: Selected IOVA mode 'PA' 00:15:22.588 EAL: Contigmem driver has 8 buffers, each of size 256MB 00:15:22.588 EAL: Ask a virtual area of 0x2e000 bytes 00:15:22.588 EAL: WARNING! Base virtual address hint (0x1000005000 != 0x1000e6d000) not respected! 00:15:22.588 EAL: This may cause issues with mapping memory into secondary processes 00:15:22.588 EAL: Virtual area found at 0x1000e6d000 (size = 0x2e000) 00:15:22.588 EAL: Setting up physically contiguous memory... 00:15:22.588 EAL: Ask a virtual area of 0x1000 bytes 00:15:22.588 EAL: WARNING! Base virtual address hint (0x100000b000 != 0x1001cdd000) not respected! 00:15:22.588 EAL: This may cause issues with mapping memory into secondary processes 00:15:22.588 EAL: Virtual area found at 0x1001cdd000 (size = 0x1000) 00:15:22.588 EAL: Memseg list allocated at socket 0, page size 0x40000kB 00:15:22.588 EAL: Ask a virtual area of 0xf0000000 bytes 00:15:22.588 EAL: WARNING! Base virtual address hint (0x105000c000 != 0x1060000000) not respected! 
00:15:22.588 EAL: This may cause issues with mapping memory into secondary processes 00:15:22.588 EAL: Virtual area found at 0x1060000000 (size = 0xf0000000) 00:15:22.588 EAL: VA reserved for memseg list at 0x1060000000, size f0000000 00:15:22.588 EAL: Mapped memory segment 0 @ 0x1060000000: physaddr:0x20000000, len 268435456 00:15:22.846 EAL: Mapped memory segment 1 @ 0x1070000000: physaddr:0x30000000, len 268435456 00:15:22.847 EAL: Mapped memory segment 2 @ 0x1080000000: physaddr:0x40000000, len 268435456 00:15:22.847 EAL: Mapped memory segment 3 @ 0x10a0000000: physaddr:0x80000000, len 268435456 00:15:23.105 EAL: Mapped memory segment 4 @ 0x1090000000: physaddr:0x90000000, len 268435456 00:15:23.105 EAL: Mapped memory segment 5 @ 0x10b0000000: physaddr:0x200000000, len 268435456 00:15:23.105 EAL: Mapped memory segment 6 @ 0x10d0000000: physaddr:0x2f0000000, len 268435456 00:15:23.364 EAL: Mapped memory segment 7 @ 0x10f0000000: physaddr:0x320000000, len 268435456 00:15:23.364 EAL: No shared files mode enabled, IPC is disabled 00:15:23.364 EAL: Added 1536M to heap on socket 0 00:15:23.364 EAL: Added 256M to heap on socket 0 00:15:23.364 EAL: Added 256M to heap on socket 0 00:15:23.364 EAL: TSC is not safe to use in SMP mode 00:15:23.364 EAL: TSC is not invariant 00:15:23.364 EAL: TSC frequency is ~2494140 KHz 00:15:23.364 EAL: Main lcore 0 is ready (tid=3d3a38212000;cpuset=[0]) 00:15:23.364 EAL: PCI scan found 10 devices 00:15:23.364 EAL: Registering mem event callbacks not supported 00:15:23.364 00:15:23.364 00:15:23.364 CUnit - A unit testing framework for C - Version 2.1-3 00:15:23.364 http://cunit.sourceforge.net/ 00:15:23.364 00:15:23.364 00:15:23.364 Suite: components_suite 00:15:23.364 Test: vtophys_malloc_test ...passed 00:15:23.931 Test: vtophys_spdk_malloc_test ...passed 00:15:23.931 00:15:23.931 Run Summary: Type Total Ran Passed Failed Inactive 00:15:23.931 suites 1 1 n/a 0 0 00:15:23.931 tests 2 2 2 0 0 00:15:23.931 asserts 521 521 521 0 n/a 00:15:23.931 00:15:23.931 Elapsed time = 0.578 seconds 00:15:23.931 00:15:23.931 real 0m1.440s 00:15:23.931 user 0m0.597s 00:15:23.931 sys 0m0.841s 00:15:23.931 09:41:51 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:23.931 09:41:51 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:15:23.931 ************************************ 00:15:23.931 END TEST env_vtophys 00:15:23.931 ************************************ 00:15:23.931 09:41:51 env -- common/autotest_common.sh@1142 -- # return 0 00:15:23.931 09:41:51 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:15:23.931 09:41:51 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:23.931 09:41:51 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:23.931 09:41:51 env -- common/autotest_common.sh@10 -- # set +x 00:15:23.931 ************************************ 00:15:23.931 START TEST env_pci 00:15:23.931 ************************************ 00:15:23.931 09:41:51 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:15:23.931 00:15:23.931 00:15:23.932 CUnit - A unit testing framework for C - Version 2.1-3 00:15:23.932 http://cunit.sourceforge.net/ 00:15:23.932 00:15:23.932 00:15:23.932 Suite: pci 00:15:23.932 Test: pci_hook ...passed 00:15:23.932 00:15:23.932 EAL: Cannot find device (10000:00:01.0) 00:15:23.932 EAL: Failed to attach device on primary process 00:15:23.932 Run Summary: Type Total Ran Passed Failed Inactive 00:15:23.932 
suites 1 1 n/a 0 0 00:15:23.932 tests 1 1 1 0 0 00:15:23.932 asserts 25 25 25 0 n/a 00:15:23.932 00:15:23.932 Elapsed time = 0.000 seconds 00:15:23.932 00:15:23.932 real 0m0.008s 00:15:23.932 user 0m0.000s 00:15:23.932 sys 0m0.008s 00:15:23.932 09:41:51 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:23.932 09:41:51 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:15:23.932 ************************************ 00:15:23.932 END TEST env_pci 00:15:23.932 ************************************ 00:15:23.932 09:41:52 env -- common/autotest_common.sh@1142 -- # return 0 00:15:23.932 09:41:52 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:15:23.932 09:41:52 env -- env/env.sh@15 -- # uname 00:15:23.932 09:41:52 env -- env/env.sh@15 -- # '[' FreeBSD = Linux ']' 00:15:23.932 09:41:52 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 00:15:24.191 09:41:52 env -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:15:24.191 09:41:52 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:24.191 09:41:52 env -- common/autotest_common.sh@10 -- # set +x 00:15:24.191 ************************************ 00:15:24.191 START TEST env_dpdk_post_init 00:15:24.191 ************************************ 00:15:24.191 09:41:52 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 00:15:24.191 EAL: Sysctl reports 10 cpus 00:15:24.191 EAL: Detected CPU lcores: 10 00:15:24.191 EAL: Detected NUMA nodes: 1 00:15:24.191 EAL: Detected static linkage of DPDK 00:15:24.191 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:15:24.191 EAL: Selected IOVA mode 'PA' 00:15:24.191 EAL: Contigmem driver has 8 buffers, each of size 256MB 00:15:24.191 EAL: Mapped memory segment 0 @ 0x1060000000: physaddr:0x20000000, len 268435456 00:15:24.191 EAL: Mapped memory segment 1 @ 0x1070000000: physaddr:0x30000000, len 268435456 00:15:24.451 EAL: Mapped memory segment 2 @ 0x1080000000: physaddr:0x40000000, len 268435456 00:15:24.451 EAL: Mapped memory segment 3 @ 0x10a0000000: physaddr:0x80000000, len 268435456 00:15:24.451 EAL: Mapped memory segment 4 @ 0x1090000000: physaddr:0x90000000, len 268435456 00:15:24.710 EAL: Mapped memory segment 5 @ 0x10b0000000: physaddr:0x200000000, len 268435456 00:15:24.710 EAL: Mapped memory segment 6 @ 0x10d0000000: physaddr:0x2f0000000, len 268435456 00:15:24.710 EAL: Mapped memory segment 7 @ 0x10f0000000: physaddr:0x320000000, len 268435456 00:15:24.710 EAL: TSC is not safe to use in SMP mode 00:15:24.710 EAL: TSC is not invariant 00:15:24.710 TELEMETRY: No legacy callbacks, legacy socket not created 00:15:24.710 [2024-07-15 09:41:52.774288] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:15:24.710 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:15:24.710 Starting DPDK initialization... 00:15:24.710 Starting SPDK post initialization... 00:15:24.710 SPDK NVMe probe 00:15:24.710 Attaching to 0000:00:10.0 00:15:24.710 Attached to 0000:00:10.0 00:15:24.710 Cleaning up... 
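[Editorial note] Both env_vtophys and env_dpdk_post_init above print the same eight contigmem mappings (e.g. segment 0 @ 0x1060000000 -> physaddr 0x20000000, len 268435456). Given such a segment table, a virtual-to-physical lookup reduces to base arithmetic over the containing segment. A minimal sketch of the idea: the two table entries copy addresses straight from this log, while the function itself is an illustrative assumption, not SPDK's actual vtophys path.

    #include <stdint.h>
    #include <stdio.h>

    struct mem_seg { uint64_t vaddr, paddr, len; };

    /* First two of the eight 256 MiB segments mapped in the log above. */
    static const struct mem_seg segs[] = {
        { 0x1060000000ULL, 0x20000000ULL, 268435456ULL },
        { 0x1070000000ULL, 0x30000000ULL, 268435456ULL },
    };

    /* Translate a virtual address; returns (uint64_t)-1 when unmapped. */
    static uint64_t vtophys(uint64_t vaddr)
    {
        for (size_t i = 0; i < sizeof(segs) / sizeof(segs[0]); i++) {
            if (vaddr >= segs[i].vaddr &&
                vaddr < segs[i].vaddr + segs[i].len)
                return segs[i].paddr + (vaddr - segs[i].vaddr);
        }
        return (uint64_t)-1;
    }

    int main(void)
    {
        /* Expect 0x20001000: offset 0x1000 into segment 0. */
        printf("0x%jx\n", (uintmax_t)vtophys(0x1060001000ULL));
        return 0;
    }

The "WARNING! Base virtual address hint ... not respected!" lines earlier matter for exactly this reason: secondary processes must map the same segments at the same virtual addresses for such lookups to stay consistent.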
00:15:24.969 00:15:24.969 real 0m0.789s 00:15:24.969 user 0m0.015s 00:15:24.969 sys 0m0.768s 00:15:24.969 09:41:52 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:24.969 09:41:52 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:15:24.969 ************************************ 00:15:24.969 END TEST env_dpdk_post_init 00:15:24.969 ************************************ 00:15:24.969 09:41:52 env -- common/autotest_common.sh@1142 -- # return 0 00:15:24.969 09:41:52 env -- env/env.sh@26 -- # uname 00:15:24.969 09:41:52 env -- env/env.sh@26 -- # '[' FreeBSD = Linux ']' 00:15:24.969 00:15:24.969 real 0m2.665s 00:15:24.969 user 0m0.799s 00:15:24.969 sys 0m1.913s 00:15:24.969 09:41:52 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:24.969 09:41:52 env -- common/autotest_common.sh@10 -- # set +x 00:15:24.969 ************************************ 00:15:24.969 END TEST env 00:15:24.969 ************************************ 00:15:24.969 09:41:52 -- common/autotest_common.sh@1142 -- # return 0 00:15:24.969 09:41:52 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:15:24.970 09:41:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:24.970 09:41:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:24.970 09:41:52 -- common/autotest_common.sh@10 -- # set +x 00:15:24.970 ************************************ 00:15:24.970 START TEST rpc 00:15:24.970 ************************************ 00:15:24.970 09:41:52 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:15:25.231 * Looking for test storage... 00:15:25.231 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:15:25.231 09:41:53 rpc -- rpc/rpc.sh@65 -- # spdk_pid=45491 00:15:25.231 09:41:53 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:15:25.231 09:41:53 rpc -- rpc/rpc.sh@67 -- # waitforlisten 45491 00:15:25.231 09:41:53 rpc -- common/autotest_common.sh@829 -- # '[' -z 45491 ']' 00:15:25.231 09:41:53 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.231 09:41:53 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:25.231 09:41:53 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:15:25.231 09:41:53 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.231 09:41:53 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:25.231 09:41:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.231 [2024-07-15 09:41:53.106547] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:15:25.231 [2024-07-15 09:41:53.106795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:25.798 EAL: TSC is not safe to use in SMP mode 00:15:25.798 EAL: TSC is not invariant 00:15:25.798 [2024-07-15 09:41:53.847397] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.058 [2024-07-15 09:41:53.965661] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:15:26.058 [2024-07-15 09:41:53.968175] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
00:15:26.058 [2024-07-15 09:41:53.968204] app.c: 607:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 45491' to capture a snapshot of events at runtime. 00:15:26.058 [2024-07-15 09:41:53.968224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.058 09:41:54 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:26.058 09:41:54 rpc -- common/autotest_common.sh@862 -- # return 0 00:15:26.058 09:41:54 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:15:26.058 09:41:54 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:15:26.058 09:41:54 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:15:26.058 09:41:54 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:15:26.058 09:41:54 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:26.058 09:41:54 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:26.058 09:41:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:26.058 ************************************ 00:15:26.058 START TEST rpc_integrity 00:15:26.058 ************************************ 00:15:26.058 09:41:54 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:15:26.058 09:41:54 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:26.058 09:41:54 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.058 09:41:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:26.058 09:41:54 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.058 09:41:54 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:15:26.058 09:41:54 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:15:26.058 09:41:54 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:15:26.058 09:41:54 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:15:26.058 09:41:54 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.058 09:41:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:26.058 09:41:54 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.058 09:41:54 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:15:26.058 09:41:54 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:15:26.058 09:41:54 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.058 09:41:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:26.058 09:41:54 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.058 09:41:54 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:15:26.058 { 00:15:26.058 "name": "Malloc0", 00:15:26.058 "aliases": [ 00:15:26.058 "7831c582-428e-11ef-a0af-c98d8ee52a94" 00:15:26.058 ], 00:15:26.058 "product_name": "Malloc disk", 00:15:26.058 "block_size": 512, 00:15:26.058 "num_blocks": 16384, 00:15:26.058 "uuid": "7831c582-428e-11ef-a0af-c98d8ee52a94", 00:15:26.058 "assigned_rate_limits": { 00:15:26.058 "rw_ios_per_sec": 0, 00:15:26.058 "rw_mbytes_per_sec": 0, 00:15:26.058 "r_mbytes_per_sec": 0, 00:15:26.058 "w_mbytes_per_sec": 0 00:15:26.058 }, 00:15:26.058 "claimed": false, 00:15:26.058 
"zoned": false, 00:15:26.058 "supported_io_types": { 00:15:26.058 "read": true, 00:15:26.058 "write": true, 00:15:26.058 "unmap": true, 00:15:26.058 "flush": true, 00:15:26.058 "reset": true, 00:15:26.058 "nvme_admin": false, 00:15:26.058 "nvme_io": false, 00:15:26.058 "nvme_io_md": false, 00:15:26.058 "write_zeroes": true, 00:15:26.058 "zcopy": true, 00:15:26.058 "get_zone_info": false, 00:15:26.058 "zone_management": false, 00:15:26.058 "zone_append": false, 00:15:26.058 "compare": false, 00:15:26.058 "compare_and_write": false, 00:15:26.058 "abort": true, 00:15:26.058 "seek_hole": false, 00:15:26.058 "seek_data": false, 00:15:26.058 "copy": true, 00:15:26.058 "nvme_iov_md": false 00:15:26.059 }, 00:15:26.059 "memory_domains": [ 00:15:26.059 { 00:15:26.059 "dma_device_id": "system", 00:15:26.059 "dma_device_type": 1 00:15:26.059 }, 00:15:26.059 { 00:15:26.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.059 "dma_device_type": 2 00:15:26.059 } 00:15:26.059 ], 00:15:26.059 "driver_specific": {} 00:15:26.059 } 00:15:26.059 ]' 00:15:26.059 09:41:54 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:15:26.319 09:41:54 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:15:26.319 09:41:54 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:15:26.319 09:41:54 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.319 09:41:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:26.319 [2024-07-15 09:41:54.157130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:15:26.319 [2024-07-15 09:41:54.157187] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.319 [2024-07-15 09:41:54.157742] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x290f89437a00 00:15:26.319 [2024-07-15 09:41:54.157763] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.319 [2024-07-15 09:41:54.158819] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.319 [2024-07-15 09:41:54.158853] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:15:26.319 Passthru0 00:15:26.319 09:41:54 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.319 09:41:54 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:15:26.319 09:41:54 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.319 09:41:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:26.319 09:41:54 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.319 09:41:54 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:15:26.319 { 00:15:26.319 "name": "Malloc0", 00:15:26.319 "aliases": [ 00:15:26.319 "7831c582-428e-11ef-a0af-c98d8ee52a94" 00:15:26.319 ], 00:15:26.319 "product_name": "Malloc disk", 00:15:26.319 "block_size": 512, 00:15:26.319 "num_blocks": 16384, 00:15:26.319 "uuid": "7831c582-428e-11ef-a0af-c98d8ee52a94", 00:15:26.319 "assigned_rate_limits": { 00:15:26.319 "rw_ios_per_sec": 0, 00:15:26.319 "rw_mbytes_per_sec": 0, 00:15:26.319 "r_mbytes_per_sec": 0, 00:15:26.319 "w_mbytes_per_sec": 0 00:15:26.319 }, 00:15:26.319 "claimed": true, 00:15:26.319 "claim_type": "exclusive_write", 00:15:26.319 "zoned": false, 00:15:26.319 "supported_io_types": { 00:15:26.319 "read": true, 00:15:26.319 "write": true, 00:15:26.319 "unmap": true, 00:15:26.319 "flush": true, 00:15:26.319 "reset": true, 
00:15:26.319 "nvme_admin": false, 00:15:26.319 "nvme_io": false, 00:15:26.319 "nvme_io_md": false, 00:15:26.319 "write_zeroes": true, 00:15:26.319 "zcopy": true, 00:15:26.319 "get_zone_info": false, 00:15:26.319 "zone_management": false, 00:15:26.319 "zone_append": false, 00:15:26.319 "compare": false, 00:15:26.319 "compare_and_write": false, 00:15:26.319 "abort": true, 00:15:26.319 "seek_hole": false, 00:15:26.319 "seek_data": false, 00:15:26.319 "copy": true, 00:15:26.319 "nvme_iov_md": false 00:15:26.319 }, 00:15:26.319 "memory_domains": [ 00:15:26.319 { 00:15:26.319 "dma_device_id": "system", 00:15:26.319 "dma_device_type": 1 00:15:26.319 }, 00:15:26.319 { 00:15:26.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.319 "dma_device_type": 2 00:15:26.319 } 00:15:26.319 ], 00:15:26.319 "driver_specific": {} 00:15:26.319 }, 00:15:26.319 { 00:15:26.319 "name": "Passthru0", 00:15:26.319 "aliases": [ 00:15:26.319 "1e707f25-a57f-6d58-909e-19fa3e3fa924" 00:15:26.319 ], 00:15:26.319 "product_name": "passthru", 00:15:26.319 "block_size": 512, 00:15:26.319 "num_blocks": 16384, 00:15:26.319 "uuid": "1e707f25-a57f-6d58-909e-19fa3e3fa924", 00:15:26.319 "assigned_rate_limits": { 00:15:26.319 "rw_ios_per_sec": 0, 00:15:26.319 "rw_mbytes_per_sec": 0, 00:15:26.319 "r_mbytes_per_sec": 0, 00:15:26.319 "w_mbytes_per_sec": 0 00:15:26.319 }, 00:15:26.319 "claimed": false, 00:15:26.319 "zoned": false, 00:15:26.319 "supported_io_types": { 00:15:26.319 "read": true, 00:15:26.319 "write": true, 00:15:26.319 "unmap": true, 00:15:26.319 "flush": true, 00:15:26.319 "reset": true, 00:15:26.319 "nvme_admin": false, 00:15:26.319 "nvme_io": false, 00:15:26.319 "nvme_io_md": false, 00:15:26.319 "write_zeroes": true, 00:15:26.319 "zcopy": true, 00:15:26.319 "get_zone_info": false, 00:15:26.319 "zone_management": false, 00:15:26.319 "zone_append": false, 00:15:26.319 "compare": false, 00:15:26.319 "compare_and_write": false, 00:15:26.319 "abort": true, 00:15:26.319 "seek_hole": false, 00:15:26.319 "seek_data": false, 00:15:26.319 "copy": true, 00:15:26.319 "nvme_iov_md": false 00:15:26.319 }, 00:15:26.319 "memory_domains": [ 00:15:26.319 { 00:15:26.319 "dma_device_id": "system", 00:15:26.319 "dma_device_type": 1 00:15:26.319 }, 00:15:26.319 { 00:15:26.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.319 "dma_device_type": 2 00:15:26.319 } 00:15:26.319 ], 00:15:26.319 "driver_specific": { 00:15:26.319 "passthru": { 00:15:26.319 "name": "Passthru0", 00:15:26.319 "base_bdev_name": "Malloc0" 00:15:26.319 } 00:15:26.319 } 00:15:26.319 } 00:15:26.319 ]' 00:15:26.319 09:41:54 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:15:26.319 09:41:54 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:15:26.320 09:41:54 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:15:26.320 09:41:54 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.320 09:41:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:26.320 09:41:54 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.320 09:41:54 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:26.320 09:41:54 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.320 09:41:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:26.320 09:41:54 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.320 09:41:54 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:26.320 
09:41:54 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.320 09:41:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:26.320 09:41:54 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.320 09:41:54 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:15:26.320 09:41:54 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:15:26.320 09:41:54 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:15:26.320 00:15:26.320 real 0m0.177s 00:15:26.320 user 0m0.045s 00:15:26.320 sys 0m0.064s 00:15:26.320 09:41:54 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:26.320 09:41:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:26.320 ************************************ 00:15:26.320 END TEST rpc_integrity 00:15:26.320 ************************************ 00:15:26.320 09:41:54 rpc -- common/autotest_common.sh@1142 -- # return 0 00:15:26.320 09:41:54 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:15:26.320 09:41:54 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:26.320 09:41:54 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:26.320 09:41:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:26.320 ************************************ 00:15:26.320 START TEST rpc_plugins 00:15:26.320 ************************************ 00:15:26.320 09:41:54 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:15:26.320 09:41:54 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:15:26.320 09:41:54 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.320 09:41:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:15:26.320 09:41:54 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.320 09:41:54 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:15:26.320 09:41:54 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:15:26.320 09:41:54 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.320 09:41:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:15:26.320 09:41:54 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.320 09:41:54 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:15:26.320 { 00:15:26.320 "name": "Malloc1", 00:15:26.320 "aliases": [ 00:15:26.320 "78521ea7-428e-11ef-a0af-c98d8ee52a94" 00:15:26.320 ], 00:15:26.320 "product_name": "Malloc disk", 00:15:26.320 "block_size": 4096, 00:15:26.320 "num_blocks": 256, 00:15:26.320 "uuid": "78521ea7-428e-11ef-a0af-c98d8ee52a94", 00:15:26.320 "assigned_rate_limits": { 00:15:26.320 "rw_ios_per_sec": 0, 00:15:26.320 "rw_mbytes_per_sec": 0, 00:15:26.320 "r_mbytes_per_sec": 0, 00:15:26.320 "w_mbytes_per_sec": 0 00:15:26.320 }, 00:15:26.320 "claimed": false, 00:15:26.320 "zoned": false, 00:15:26.320 "supported_io_types": { 00:15:26.320 "read": true, 00:15:26.320 "write": true, 00:15:26.320 "unmap": true, 00:15:26.320 "flush": true, 00:15:26.320 "reset": true, 00:15:26.320 "nvme_admin": false, 00:15:26.320 "nvme_io": false, 00:15:26.320 "nvme_io_md": false, 00:15:26.320 "write_zeroes": true, 00:15:26.320 "zcopy": true, 00:15:26.320 "get_zone_info": false, 00:15:26.320 "zone_management": false, 00:15:26.320 "zone_append": false, 00:15:26.320 "compare": false, 00:15:26.320 "compare_and_write": false, 00:15:26.320 "abort": true, 00:15:26.320 "seek_hole": false, 00:15:26.320 "seek_data": false, 00:15:26.320 "copy": 
true, 00:15:26.320 "nvme_iov_md": false 00:15:26.320 }, 00:15:26.320 "memory_domains": [ 00:15:26.320 { 00:15:26.320 "dma_device_id": "system", 00:15:26.320 "dma_device_type": 1 00:15:26.320 }, 00:15:26.320 { 00:15:26.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.320 "dma_device_type": 2 00:15:26.320 } 00:15:26.320 ], 00:15:26.320 "driver_specific": {} 00:15:26.320 } 00:15:26.320 ]' 00:15:26.320 09:41:54 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:15:26.320 09:41:54 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:15:26.320 09:41:54 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:15:26.320 09:41:54 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.320 09:41:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:15:26.320 09:41:54 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.320 09:41:54 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:15:26.320 09:41:54 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.320 09:41:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:15:26.320 09:41:54 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.320 09:41:54 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:15:26.320 09:41:54 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:15:26.320 09:41:54 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:15:26.320 00:15:26.320 real 0m0.092s 00:15:26.320 user 0m0.043s 00:15:26.320 sys 0m0.006s 00:15:26.320 09:41:54 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:26.320 ************************************ 00:15:26.320 END TEST rpc_plugins 00:15:26.320 ************************************ 00:15:26.320 09:41:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:15:26.580 09:41:54 rpc -- common/autotest_common.sh@1142 -- # return 0 00:15:26.580 09:41:54 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:15:26.580 09:41:54 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:26.580 09:41:54 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:26.580 09:41:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:26.580 ************************************ 00:15:26.580 START TEST rpc_trace_cmd_test 00:15:26.580 ************************************ 00:15:26.580 09:41:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:15:26.580 09:41:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:15:26.580 09:41:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:15:26.580 09:41:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.580 09:41:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.580 09:41:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.580 09:41:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:15:26.580 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid45491", 00:15:26.580 "tpoint_group_mask": "0x8", 00:15:26.580 "iscsi_conn": { 00:15:26.580 "mask": "0x2", 00:15:26.580 "tpoint_mask": "0x0" 00:15:26.580 }, 00:15:26.580 "scsi": { 00:15:26.581 "mask": "0x4", 00:15:26.581 "tpoint_mask": "0x0" 00:15:26.581 }, 00:15:26.581 "bdev": { 00:15:26.581 "mask": "0x8", 00:15:26.581 "tpoint_mask": "0xffffffffffffffff" 00:15:26.581 }, 00:15:26.581 "nvmf_rdma": { 00:15:26.581 "mask": "0x10", 00:15:26.581 
"tpoint_mask": "0x0" 00:15:26.581 }, 00:15:26.581 "nvmf_tcp": { 00:15:26.581 "mask": "0x20", 00:15:26.581 "tpoint_mask": "0x0" 00:15:26.581 }, 00:15:26.581 "blobfs": { 00:15:26.581 "mask": "0x80", 00:15:26.581 "tpoint_mask": "0x0" 00:15:26.581 }, 00:15:26.581 "dsa": { 00:15:26.581 "mask": "0x200", 00:15:26.581 "tpoint_mask": "0x0" 00:15:26.581 }, 00:15:26.581 "thread": { 00:15:26.581 "mask": "0x400", 00:15:26.581 "tpoint_mask": "0x0" 00:15:26.581 }, 00:15:26.581 "nvme_pcie": { 00:15:26.581 "mask": "0x800", 00:15:26.581 "tpoint_mask": "0x0" 00:15:26.581 }, 00:15:26.581 "iaa": { 00:15:26.581 "mask": "0x1000", 00:15:26.581 "tpoint_mask": "0x0" 00:15:26.581 }, 00:15:26.581 "nvme_tcp": { 00:15:26.581 "mask": "0x2000", 00:15:26.581 "tpoint_mask": "0x0" 00:15:26.581 }, 00:15:26.581 "bdev_nvme": { 00:15:26.581 "mask": "0x4000", 00:15:26.581 "tpoint_mask": "0x0" 00:15:26.581 }, 00:15:26.581 "sock": { 00:15:26.581 "mask": "0x8000", 00:15:26.581 "tpoint_mask": "0x0" 00:15:26.581 } 00:15:26.581 }' 00:15:26.581 09:41:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:15:26.581 09:41:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:15:26.581 09:41:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:15:26.581 09:41:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:15:26.581 09:41:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:15:26.581 09:41:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:15:26.581 09:41:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:15:26.581 09:41:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:15:26.581 09:41:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:15:26.581 09:41:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:15:26.581 00:15:26.581 real 0m0.073s 00:15:26.581 user 0m0.038s 00:15:26.581 sys 0m0.026s 00:15:26.581 ************************************ 00:15:26.581 END TEST rpc_trace_cmd_test 00:15:26.581 ************************************ 00:15:26.581 09:41:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:26.581 09:41:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.581 09:41:54 rpc -- common/autotest_common.sh@1142 -- # return 0 00:15:26.581 09:41:54 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:15:26.581 09:41:54 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:15:26.581 09:41:54 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:15:26.581 09:41:54 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:26.581 09:41:54 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:26.581 09:41:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:26.581 ************************************ 00:15:26.581 START TEST rpc_daemon_integrity 00:15:26.581 ************************************ 00:15:26.581 09:41:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:15:26.581 09:41:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:26.581 09:41:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.581 09:41:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:26.581 09:41:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.581 09:41:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:15:26.581 
09:41:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:15:26.581 09:41:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:15:26.581 09:41:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:15:26.581 09:41:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.581 09:41:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:26.581 09:41:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.581 09:41:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:15:26.581 09:41:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:15:26.581 09:41:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.581 09:41:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:26.581 09:41:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.581 09:41:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:15:26.581 { 00:15:26.581 "name": "Malloc2", 00:15:26.581 "aliases": [ 00:15:26.581 "787c3bfd-428e-11ef-a0af-c98d8ee52a94" 00:15:26.581 ], 00:15:26.581 "product_name": "Malloc disk", 00:15:26.581 "block_size": 512, 00:15:26.581 "num_blocks": 16384, 00:15:26.581 "uuid": "787c3bfd-428e-11ef-a0af-c98d8ee52a94", 00:15:26.581 "assigned_rate_limits": { 00:15:26.581 "rw_ios_per_sec": 0, 00:15:26.581 "rw_mbytes_per_sec": 0, 00:15:26.581 "r_mbytes_per_sec": 0, 00:15:26.581 "w_mbytes_per_sec": 0 00:15:26.581 }, 00:15:26.581 "claimed": false, 00:15:26.581 "zoned": false, 00:15:26.581 "supported_io_types": { 00:15:26.581 "read": true, 00:15:26.581 "write": true, 00:15:26.581 "unmap": true, 00:15:26.581 "flush": true, 00:15:26.581 "reset": true, 00:15:26.581 "nvme_admin": false, 00:15:26.581 "nvme_io": false, 00:15:26.581 "nvme_io_md": false, 00:15:26.581 "write_zeroes": true, 00:15:26.581 "zcopy": true, 00:15:26.581 "get_zone_info": false, 00:15:26.581 "zone_management": false, 00:15:26.581 "zone_append": false, 00:15:26.581 "compare": false, 00:15:26.581 "compare_and_write": false, 00:15:26.581 "abort": true, 00:15:26.581 "seek_hole": false, 00:15:26.581 "seek_data": false, 00:15:26.581 "copy": true, 00:15:26.581 "nvme_iov_md": false 00:15:26.581 }, 00:15:26.581 "memory_domains": [ 00:15:26.581 { 00:15:26.581 "dma_device_id": "system", 00:15:26.581 "dma_device_type": 1 00:15:26.581 }, 00:15:26.581 { 00:15:26.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.581 "dma_device_type": 2 00:15:26.581 } 00:15:26.581 ], 00:15:26.581 "driver_specific": {} 00:15:26.581 } 00:15:26.581 ]' 00:15:26.581 09:41:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:15:26.581 09:41:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:15:26.581 09:41:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:15:26.581 09:41:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.581 09:41:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:26.581 [2024-07-15 09:41:54.637154] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:15:26.581 [2024-07-15 09:41:54.637202] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.581 [2024-07-15 09:41:54.637237] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x290f89437a00 00:15:26.581 [2024-07-15 
09:41:54.637243] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.581 [2024-07-15 09:41:54.637780] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.581 [2024-07-15 09:41:54.637806] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:15:26.581 Passthru0 00:15:26.581 09:41:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.581 09:41:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:15:26.581 09:41:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.581 09:41:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:26.841 09:41:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.841 09:41:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:15:26.841 { 00:15:26.841 "name": "Malloc2", 00:15:26.841 "aliases": [ 00:15:26.841 "787c3bfd-428e-11ef-a0af-c98d8ee52a94" 00:15:26.841 ], 00:15:26.841 "product_name": "Malloc disk", 00:15:26.841 "block_size": 512, 00:15:26.841 "num_blocks": 16384, 00:15:26.841 "uuid": "787c3bfd-428e-11ef-a0af-c98d8ee52a94", 00:15:26.841 "assigned_rate_limits": { 00:15:26.841 "rw_ios_per_sec": 0, 00:15:26.841 "rw_mbytes_per_sec": 0, 00:15:26.841 "r_mbytes_per_sec": 0, 00:15:26.841 "w_mbytes_per_sec": 0 00:15:26.841 }, 00:15:26.841 "claimed": true, 00:15:26.841 "claim_type": "exclusive_write", 00:15:26.841 "zoned": false, 00:15:26.841 "supported_io_types": { 00:15:26.841 "read": true, 00:15:26.841 "write": true, 00:15:26.841 "unmap": true, 00:15:26.841 "flush": true, 00:15:26.841 "reset": true, 00:15:26.841 "nvme_admin": false, 00:15:26.841 "nvme_io": false, 00:15:26.841 "nvme_io_md": false, 00:15:26.841 "write_zeroes": true, 00:15:26.841 "zcopy": true, 00:15:26.841 "get_zone_info": false, 00:15:26.841 "zone_management": false, 00:15:26.841 "zone_append": false, 00:15:26.841 "compare": false, 00:15:26.841 "compare_and_write": false, 00:15:26.841 "abort": true, 00:15:26.841 "seek_hole": false, 00:15:26.841 "seek_data": false, 00:15:26.841 "copy": true, 00:15:26.841 "nvme_iov_md": false 00:15:26.841 }, 00:15:26.841 "memory_domains": [ 00:15:26.841 { 00:15:26.841 "dma_device_id": "system", 00:15:26.841 "dma_device_type": 1 00:15:26.841 }, 00:15:26.841 { 00:15:26.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.841 "dma_device_type": 2 00:15:26.841 } 00:15:26.841 ], 00:15:26.841 "driver_specific": {} 00:15:26.841 }, 00:15:26.841 { 00:15:26.841 "name": "Passthru0", 00:15:26.841 "aliases": [ 00:15:26.841 "806ee33f-090f-e953-bcb0-807e46fb8a2f" 00:15:26.841 ], 00:15:26.841 "product_name": "passthru", 00:15:26.841 "block_size": 512, 00:15:26.841 "num_blocks": 16384, 00:15:26.841 "uuid": "806ee33f-090f-e953-bcb0-807e46fb8a2f", 00:15:26.841 "assigned_rate_limits": { 00:15:26.841 "rw_ios_per_sec": 0, 00:15:26.841 "rw_mbytes_per_sec": 0, 00:15:26.841 "r_mbytes_per_sec": 0, 00:15:26.841 "w_mbytes_per_sec": 0 00:15:26.841 }, 00:15:26.841 "claimed": false, 00:15:26.841 "zoned": false, 00:15:26.841 "supported_io_types": { 00:15:26.841 "read": true, 00:15:26.841 "write": true, 00:15:26.841 "unmap": true, 00:15:26.841 "flush": true, 00:15:26.841 "reset": true, 00:15:26.841 "nvme_admin": false, 00:15:26.841 "nvme_io": false, 00:15:26.841 "nvme_io_md": false, 00:15:26.841 "write_zeroes": true, 00:15:26.841 "zcopy": true, 00:15:26.841 "get_zone_info": false, 00:15:26.842 "zone_management": false, 00:15:26.842 "zone_append": 
false, 00:15:26.842 "compare": false, 00:15:26.842 "compare_and_write": false, 00:15:26.842 "abort": true, 00:15:26.842 "seek_hole": false, 00:15:26.842 "seek_data": false, 00:15:26.842 "copy": true, 00:15:26.842 "nvme_iov_md": false 00:15:26.842 }, 00:15:26.842 "memory_domains": [ 00:15:26.842 { 00:15:26.842 "dma_device_id": "system", 00:15:26.842 "dma_device_type": 1 00:15:26.842 }, 00:15:26.842 { 00:15:26.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.842 "dma_device_type": 2 00:15:26.842 } 00:15:26.842 ], 00:15:26.842 "driver_specific": { 00:15:26.842 "passthru": { 00:15:26.842 "name": "Passthru0", 00:15:26.842 "base_bdev_name": "Malloc2" 00:15:26.842 } 00:15:26.842 } 00:15:26.842 } 00:15:26.842 ]' 00:15:26.842 09:41:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:15:26.842 09:41:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:15:26.842 09:41:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:15:26.842 09:41:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.842 09:41:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:26.842 09:41:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.842 09:41:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:15:26.842 09:41:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.842 09:41:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:26.842 09:41:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.842 09:41:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:26.842 09:41:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.842 09:41:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:26.842 09:41:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.842 09:41:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:15:26.842 09:41:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:15:26.842 09:41:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:15:26.842 00:15:26.842 real 0m0.160s 00:15:26.842 user 0m0.060s 00:15:26.842 sys 0m0.034s 00:15:26.842 09:41:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:26.842 09:41:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:26.842 ************************************ 00:15:26.842 END TEST rpc_daemon_integrity 00:15:26.842 ************************************ 00:15:26.842 09:41:54 rpc -- common/autotest_common.sh@1142 -- # return 0 00:15:26.842 09:41:54 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:26.842 09:41:54 rpc -- rpc/rpc.sh@84 -- # killprocess 45491 00:15:26.842 09:41:54 rpc -- common/autotest_common.sh@948 -- # '[' -z 45491 ']' 00:15:26.842 09:41:54 rpc -- common/autotest_common.sh@952 -- # kill -0 45491 00:15:26.842 09:41:54 rpc -- common/autotest_common.sh@953 -- # uname 00:15:26.842 09:41:54 rpc -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:15:26.842 09:41:54 rpc -- common/autotest_common.sh@956 -- # ps -c -o command 45491 00:15:26.842 09:41:54 rpc -- common/autotest_common.sh@956 -- # tail -1 00:15:26.842 09:41:54 rpc -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:15:26.842 09:41:54 rpc -- 
common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:15:26.842 killing process with pid 45491 00:15:26.842 09:41:54 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 45491' 00:15:26.842 09:41:54 rpc -- common/autotest_common.sh@967 -- # kill 45491 00:15:26.842 09:41:54 rpc -- common/autotest_common.sh@972 -- # wait 45491 00:15:27.102 00:15:27.102 real 0m2.235s 00:15:27.102 user 0m2.050s 00:15:27.102 sys 0m1.207s 00:15:27.102 09:41:55 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:27.102 09:41:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:27.102 ************************************ 00:15:27.102 END TEST rpc 00:15:27.102 ************************************ 00:15:27.362 09:41:55 -- common/autotest_common.sh@1142 -- # return 0 00:15:27.362 09:41:55 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:15:27.362 09:41:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:27.362 09:41:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:27.362 09:41:55 -- common/autotest_common.sh@10 -- # set +x 00:15:27.362 ************************************ 00:15:27.362 START TEST skip_rpc 00:15:27.362 ************************************ 00:15:27.362 09:41:55 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:15:27.362 * Looking for test storage... 00:15:27.362 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:15:27.362 09:41:55 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:15:27.362 09:41:55 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:15:27.362 09:41:55 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:15:27.362 09:41:55 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:27.362 09:41:55 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:27.362 09:41:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:27.362 ************************************ 00:15:27.362 START TEST skip_rpc 00:15:27.362 ************************************ 00:15:27.362 09:41:55 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:15:27.362 09:41:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=45669 00:15:27.362 09:41:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:15:27.362 09:41:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:15:27.362 09:41:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:15:27.362 [2024-07-15 09:41:55.425979] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:15:27.362 [2024-07-15 09:41:55.426308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:28.301 EAL: TSC is not safe to use in SMP mode 00:15:28.301 EAL: TSC is not invariant 00:15:28.301 [2024-07-15 09:41:56.128439] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.301 [2024-07-15 09:41:56.246690] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
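The rpc_daemon_integrity pass that wrapped up above is a create/wrap/teardown cycle driven entirely over JSON-RPC, with jq length over bdev_get_bdevs as the invariant checked between steps. A minimal standalone sketch of that cycle, assuming a target already running on the default /var/tmp/spdk.sock; the test's own rpc_cmd/NOT plumbing is omitted:

#!/usr/bin/env bash
# Sketch: malloc bdev -> passthru vbdev -> teardown, verifying the bdev
# count after each step, as the rpc.sh assertions above do.
set -euo pipefail
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # path as used in this run

[ "$($rpc bdev_get_bdevs | jq length)" -eq 0 ]     # start with no bdevs

malloc=$($rpc bdev_malloc_create 8 512)            # 8 MB, 512 B blocks; prints the new name
[ "$($rpc bdev_get_bdevs | jq length)" -eq 1 ]

$rpc bdev_passthru_create -b "$malloc" -p Passthru0   # claims the base bdev
[ "$($rpc bdev_get_bdevs | jq length)" -eq 2 ]     # base + passthru

$rpc bdev_passthru_delete Passthru0
$rpc bdev_malloc_delete "$malloc"
[ "$($rpc bdev_get_bdevs | jq length)" -eq 0 ]     # clean again

After the passthru claims its base, the bdev_get_bdevs dump above shows Malloc2 with "claimed": true and "claim_type": "exclusive_write", which is what the length-2 check is standing in for.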
00:15:28.301 [2024-07-15 09:41:56.249453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.529 09:42:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:15:32.529 09:42:00 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:15:32.529 09:42:00 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:15:32.529 09:42:00 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:15:32.529 09:42:00 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:32.529 09:42:00 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:15:32.529 09:42:00 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:32.529 09:42:00 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:15:32.529 09:42:00 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.529 09:42:00 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:32.529 09:42:00 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:32.529 09:42:00 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:15:32.529 09:42:00 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:32.529 09:42:00 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:32.529 09:42:00 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:32.529 09:42:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:15:32.529 09:42:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 45669 00:15:32.529 09:42:00 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 45669 ']' 00:15:32.529 09:42:00 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 45669 00:15:32.529 09:42:00 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:15:32.529 09:42:00 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:15:32.529 09:42:00 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps -c -o command 45669 00:15:32.529 09:42:00 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # tail -1 00:15:32.529 09:42:00 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:15:32.529 killing process with pid 45669 00:15:32.529 09:42:00 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:15:32.529 09:42:00 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 45669' 00:15:32.529 09:42:00 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 45669 00:15:32.529 09:42:00 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 45669 00:15:32.792 00:15:32.792 real 0m5.413s 00:15:32.792 user 0m4.716s 00:15:32.792 sys 0m0.714s 00:15:32.792 09:42:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:32.792 ************************************ 00:15:32.792 END TEST skip_rpc 00:15:32.792 ************************************ 00:15:32.792 09:42:00 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:32.792 09:42:00 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:15:32.792 09:42:00 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:15:32.792 09:42:00 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:32.792 09:42:00 skip_rpc -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:15:32.792 09:42:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:32.792 ************************************ 00:15:32.792 START TEST skip_rpc_with_json 00:15:32.792 ************************************ 00:15:32.792 09:42:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:15:32.792 09:42:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:15:32.792 09:42:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=45714 00:15:32.792 09:42:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:15:32.792 09:42:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 45714 00:15:32.792 09:42:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 45714 ']' 00:15:32.792 09:42:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:32.792 09:42:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.792 09:42:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:32.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.792 09:42:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.792 09:42:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:32.792 09:42:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:32.792 [2024-07-15 09:42:00.882189] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:15:32.792 [2024-07-15 09:42:00.882419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:33.740 EAL: TSC is not safe to use in SMP mode 00:15:33.740 EAL: TSC is not invariant 00:15:33.740 [2024-07-15 09:42:01.613150] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.740 [2024-07-15 09:42:01.729024] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
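The skip_rpc pass above (pid 45669) is the mirror image of the RPC-driven tests: spdk_tgt is launched with --no-rpc-server, and the test passes only if a later RPC attempt fails cleanly. Roughly, with the paths from this run and the NOT/rpc_cmd helpers stripped away:

# Launch the target without an RPC server, give it time to come up
# (the test itself sleeps 5 s, see rpc/skip_rpc.sh@19 above), then
# require that an RPC call cannot succeed against it.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
tgt_pid=$!
sleep 5

if /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version; then
    echo 'unexpected: spdk_get_version succeeded with --no-rpc-server' >&2
    exit 1
fi

kill "$tgt_pid"; wait "$tgt_pid" || true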
00:15:33.740 [2024-07-15 09:42:01.731505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.740 09:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:33.740 09:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:15:33.740 09:42:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:15:33.740 09:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.740 09:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:33.740 [2024-07-15 09:42:01.823542] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:15:33.740 request: 00:15:33.740 { 00:15:33.740 "trtype": "tcp", 00:15:33.740 "method": "nvmf_get_transports", 00:15:33.740 "req_id": 1 00:15:33.740 } 00:15:33.740 Got JSON-RPC error response 00:15:33.740 response: 00:15:33.740 { 00:15:33.740 "code": -19, 00:15:33.740 "message": "Operation not supported by device" 00:15:33.740 } 00:15:33.740 09:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:33.740 09:42:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:15:33.740 09:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.740 09:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:33.999 [2024-07-15 09:42:01.835571] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:33.999 09:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.999 09:42:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:15:33.999 09:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.999 09:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:33.999 09:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.999 09:42:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:15:33.999 { 00:15:33.999 "subsystems": [ 00:15:33.999 { 00:15:33.999 "subsystem": "vmd", 00:15:33.999 "config": [] 00:15:33.999 }, 00:15:33.999 { 00:15:33.999 "subsystem": "iobuf", 00:15:33.999 "config": [ 00:15:33.999 { 00:15:33.999 "method": "iobuf_set_options", 00:15:33.999 "params": { 00:15:33.999 "small_pool_count": 8192, 00:15:33.999 "large_pool_count": 1024, 00:15:33.999 "small_bufsize": 8192, 00:15:33.999 "large_bufsize": 135168 00:15:33.999 } 00:15:33.999 } 00:15:33.999 ] 00:15:33.999 }, 00:15:33.999 { 00:15:33.999 "subsystem": "scheduler", 00:15:33.999 "config": [ 00:15:33.999 { 00:15:33.999 "method": "framework_set_scheduler", 00:15:33.999 "params": { 00:15:33.999 "name": "static" 00:15:33.999 } 00:15:33.999 } 00:15:33.999 ] 00:15:33.999 }, 00:15:33.999 { 00:15:33.999 "subsystem": "sock", 00:15:33.999 "config": [ 00:15:33.999 { 00:15:33.999 "method": "sock_set_default_impl", 00:15:33.999 "params": { 00:15:33.999 "impl_name": "posix" 00:15:33.999 } 00:15:33.999 }, 00:15:33.999 { 00:15:33.999 "method": "sock_impl_set_options", 00:15:33.999 "params": { 00:15:33.999 "impl_name": "ssl", 00:15:33.999 "recv_buf_size": 4096, 00:15:33.999 "send_buf_size": 4096, 00:15:33.999 "enable_recv_pipe": true, 00:15:33.999 "enable_quickack": false, 00:15:33.999 "enable_placement_id": 0, 00:15:33.999 
"enable_zerocopy_send_server": true, 00:15:33.999 "enable_zerocopy_send_client": false, 00:15:33.999 "zerocopy_threshold": 0, 00:15:33.999 "tls_version": 0, 00:15:33.999 "enable_ktls": false 00:15:33.999 } 00:15:33.999 }, 00:15:33.999 { 00:15:33.999 "method": "sock_impl_set_options", 00:15:33.999 "params": { 00:15:33.999 "impl_name": "posix", 00:15:33.999 "recv_buf_size": 2097152, 00:15:33.999 "send_buf_size": 2097152, 00:15:33.999 "enable_recv_pipe": true, 00:15:33.999 "enable_quickack": false, 00:15:33.999 "enable_placement_id": 0, 00:15:33.999 "enable_zerocopy_send_server": true, 00:15:33.999 "enable_zerocopy_send_client": false, 00:15:33.999 "zerocopy_threshold": 0, 00:15:33.999 "tls_version": 0, 00:15:33.999 "enable_ktls": false 00:15:33.999 } 00:15:33.999 } 00:15:33.999 ] 00:15:33.999 }, 00:15:33.999 { 00:15:33.999 "subsystem": "keyring", 00:15:33.999 "config": [] 00:15:33.999 }, 00:15:33.999 { 00:15:33.999 "subsystem": "accel", 00:15:33.999 "config": [ 00:15:33.999 { 00:15:33.999 "method": "accel_set_options", 00:15:33.999 "params": { 00:15:33.999 "small_cache_size": 128, 00:15:33.999 "large_cache_size": 16, 00:15:33.999 "task_count": 2048, 00:15:33.999 "sequence_count": 2048, 00:15:33.999 "buf_count": 2048 00:15:33.999 } 00:15:33.999 } 00:15:33.999 ] 00:15:33.999 }, 00:15:33.999 { 00:15:33.999 "subsystem": "bdev", 00:15:33.999 "config": [ 00:15:33.999 { 00:15:33.999 "method": "bdev_set_options", 00:15:33.999 "params": { 00:15:33.999 "bdev_io_pool_size": 65535, 00:15:33.999 "bdev_io_cache_size": 256, 00:15:33.999 "bdev_auto_examine": true, 00:15:33.999 "iobuf_small_cache_size": 128, 00:15:33.999 "iobuf_large_cache_size": 16 00:15:33.999 } 00:15:33.999 }, 00:15:33.999 { 00:15:33.999 "method": "bdev_raid_set_options", 00:15:33.999 "params": { 00:15:33.999 "process_window_size_kb": 1024 00:15:33.999 } 00:15:33.999 }, 00:15:33.999 { 00:15:33.999 "method": "bdev_nvme_set_options", 00:15:33.999 "params": { 00:15:33.999 "action_on_timeout": "none", 00:15:33.999 "timeout_us": 0, 00:15:33.999 "timeout_admin_us": 0, 00:15:33.999 "keep_alive_timeout_ms": 10000, 00:15:33.999 "arbitration_burst": 0, 00:15:33.999 "low_priority_weight": 0, 00:15:33.999 "medium_priority_weight": 0, 00:15:33.999 "high_priority_weight": 0, 00:15:33.999 "nvme_adminq_poll_period_us": 10000, 00:15:33.999 "nvme_ioq_poll_period_us": 0, 00:15:33.999 "io_queue_requests": 0, 00:15:33.999 "delay_cmd_submit": true, 00:15:33.999 "transport_retry_count": 4, 00:15:33.999 "bdev_retry_count": 3, 00:15:33.999 "transport_ack_timeout": 0, 00:15:33.999 "ctrlr_loss_timeout_sec": 0, 00:15:33.999 "reconnect_delay_sec": 0, 00:15:33.999 "fast_io_fail_timeout_sec": 0, 00:15:33.999 "disable_auto_failback": false, 00:15:33.999 "generate_uuids": false, 00:15:33.999 "transport_tos": 0, 00:15:33.999 "nvme_error_stat": false, 00:15:33.999 "rdma_srq_size": 0, 00:15:33.999 "io_path_stat": false, 00:15:34.000 "allow_accel_sequence": false, 00:15:34.000 "rdma_max_cq_size": 0, 00:15:34.000 "rdma_cm_event_timeout_ms": 0, 00:15:34.000 "dhchap_digests": [ 00:15:34.000 "sha256", 00:15:34.000 "sha384", 00:15:34.000 "sha512" 00:15:34.000 ], 00:15:34.000 "dhchap_dhgroups": [ 00:15:34.000 "null", 00:15:34.000 "ffdhe2048", 00:15:34.000 "ffdhe3072", 00:15:34.000 "ffdhe4096", 00:15:34.000 "ffdhe6144", 00:15:34.000 "ffdhe8192" 00:15:34.000 ] 00:15:34.000 } 00:15:34.000 }, 00:15:34.000 { 00:15:34.000 "method": "bdev_nvme_set_hotplug", 00:15:34.000 "params": { 00:15:34.000 "period_us": 100000, 00:15:34.000 "enable": false 00:15:34.000 } 00:15:34.000 }, 00:15:34.000 
{ 00:15:34.000 "method": "bdev_wait_for_examine" 00:15:34.000 } 00:15:34.000 ] 00:15:34.000 }, 00:15:34.000 { 00:15:34.000 "subsystem": "scsi", 00:15:34.000 "config": null 00:15:34.000 }, 00:15:34.000 { 00:15:34.000 "subsystem": "nvmf", 00:15:34.000 "config": [ 00:15:34.000 { 00:15:34.000 "method": "nvmf_set_config", 00:15:34.000 "params": { 00:15:34.000 "discovery_filter": "match_any", 00:15:34.000 "admin_cmd_passthru": { 00:15:34.000 "identify_ctrlr": false 00:15:34.000 } 00:15:34.000 } 00:15:34.000 }, 00:15:34.000 { 00:15:34.000 "method": "nvmf_set_max_subsystems", 00:15:34.000 "params": { 00:15:34.000 "max_subsystems": 1024 00:15:34.000 } 00:15:34.000 }, 00:15:34.000 { 00:15:34.000 "method": "nvmf_set_crdt", 00:15:34.000 "params": { 00:15:34.000 "crdt1": 0, 00:15:34.000 "crdt2": 0, 00:15:34.000 "crdt3": 0 00:15:34.000 } 00:15:34.000 }, 00:15:34.000 { 00:15:34.000 "method": "nvmf_create_transport", 00:15:34.000 "params": { 00:15:34.000 "trtype": "TCP", 00:15:34.000 "max_queue_depth": 128, 00:15:34.000 "max_io_qpairs_per_ctrlr": 127, 00:15:34.000 "in_capsule_data_size": 4096, 00:15:34.000 "max_io_size": 131072, 00:15:34.000 "io_unit_size": 131072, 00:15:34.000 "max_aq_depth": 128, 00:15:34.000 "num_shared_buffers": 511, 00:15:34.000 "buf_cache_size": 4294967295, 00:15:34.000 "dif_insert_or_strip": false, 00:15:34.000 "zcopy": false, 00:15:34.000 "c2h_success": true, 00:15:34.000 "sock_priority": 0, 00:15:34.000 "abort_timeout_sec": 1, 00:15:34.000 "ack_timeout": 0, 00:15:34.000 "data_wr_pool_size": 0 00:15:34.000 } 00:15:34.000 } 00:15:34.000 ] 00:15:34.000 }, 00:15:34.000 { 00:15:34.000 "subsystem": "iscsi", 00:15:34.000 "config": [ 00:15:34.000 { 00:15:34.000 "method": "iscsi_set_options", 00:15:34.000 "params": { 00:15:34.000 "node_base": "iqn.2016-06.io.spdk", 00:15:34.000 "max_sessions": 128, 00:15:34.000 "max_connections_per_session": 2, 00:15:34.000 "max_queue_depth": 64, 00:15:34.000 "default_time2wait": 2, 00:15:34.000 "default_time2retain": 20, 00:15:34.000 "first_burst_length": 8192, 00:15:34.000 "immediate_data": true, 00:15:34.000 "allow_duplicated_isid": false, 00:15:34.000 "error_recovery_level": 0, 00:15:34.000 "nop_timeout": 60, 00:15:34.000 "nop_in_interval": 30, 00:15:34.000 "disable_chap": false, 00:15:34.000 "require_chap": false, 00:15:34.000 "mutual_chap": false, 00:15:34.000 "chap_group": 0, 00:15:34.000 "max_large_datain_per_connection": 64, 00:15:34.000 "max_r2t_per_connection": 4, 00:15:34.000 "pdu_pool_size": 36864, 00:15:34.000 "immediate_data_pool_size": 16384, 00:15:34.000 "data_out_pool_size": 2048 00:15:34.000 } 00:15:34.000 } 00:15:34.000 ] 00:15:34.000 } 00:15:34.000 ] 00:15:34.000 } 00:15:34.000 09:42:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:34.000 09:42:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 45714 00:15:34.000 09:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 45714 ']' 00:15:34.000 09:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 45714 00:15:34.000 09:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:15:34.000 09:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:15:34.000 09:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps -c -o command 45714 00:15:34.000 09:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # tail -1 00:15:34.000 09:42:01 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:15:34.000 09:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:15:34.000 killing process with pid 45714 00:15:34.000 09:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 45714' 00:15:34.000 09:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 45714 00:15:34.000 09:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 45714 00:15:34.259 09:42:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=45728 00:15:34.259 09:42:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:15:34.259 09:42:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:15:40.827 09:42:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 45728 00:15:40.827 09:42:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 45728 ']' 00:15:40.827 09:42:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 45728 00:15:40.827 09:42:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:15:40.827 09:42:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:15:40.827 09:42:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps -c -o command 45728 00:15:40.827 09:42:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # tail -1 00:15:40.827 09:42:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:15:40.827 09:42:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:15:40.827 killing process with pid 45728 00:15:40.827 09:42:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 45728' 00:15:40.827 09:42:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 45728 00:15:40.827 09:42:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 45728 00:15:40.827 09:42:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:15:40.827 09:42:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:15:40.827 00:15:40.827 real 0m7.157s 00:15:40.827 user 0m6.074s 00:15:40.827 sys 0m1.563s 00:15:40.827 09:42:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:40.827 09:42:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:40.827 ************************************ 00:15:40.827 END TEST skip_rpc_with_json 00:15:40.827 ************************************ 00:15:40.827 09:42:08 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:15:40.827 09:42:08 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:15:40.827 09:42:08 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:40.827 09:42:08 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:40.827 09:42:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:40.827 ************************************ 00:15:40.827 START TEST skip_rpc_with_delay 00:15:40.827 ************************************ 00:15:40.827 09:42:08 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:15:40.827 09:42:08 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:15:40.827 09:42:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:15:40.827 09:42:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:15:40.827 09:42:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:40.827 09:42:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:40.827 09:42:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:40.827 09:42:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:40.827 09:42:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:40.827 09:42:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:40.827 09:42:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:40.827 09:42:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:15:40.827 09:42:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:15:40.827 [2024-07-15 09:42:08.098880] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
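skip_rpc_with_json, which finished just before the delay test above, proves a configuration round trip: changes made over RPC are captured with save_config and must take effect again when a fresh target boots from that JSON via --json. Condensed, with this run's paths (the redirect of the second target's output into log.txt is implied by the grep the script performs):

CONFIG=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
LOG=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# On the live target: create the TCP transport, then snapshot the full
# running configuration as JSON.
$rpc nvmf_create_transport -t tcp
$rpc save_config > "$CONFIG"

# Boot a fresh target from the snapshot instead of replaying RPCs; if the
# config really was applied, transport init shows up in its output.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 \
    --json "$CONFIG" > "$LOG" 2>&1 &
sleep 5
grep -q 'TCP Transport Init' "$LOG"

The failed nvmf_get_transports --trtype tcp call above (JSON-RPC error -19, "Operation not supported by device") is the before picture: the transport genuinely does not exist until it has been created once.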
00:15:40.827 [2024-07-15 09:42:08.099293] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:15:40.827 09:42:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:15:40.827 09:42:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:40.827 09:42:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:40.827 09:42:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:40.827 00:15:40.827 real 0m0.019s 00:15:40.827 user 0m0.010s 00:15:40.827 sys 0m0.004s 00:15:40.827 09:42:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:40.827 09:42:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:15:40.827 ************************************ 00:15:40.827 END TEST skip_rpc_with_delay 00:15:40.827 ************************************ 00:15:40.827 09:42:08 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:15:40.827 09:42:08 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:15:40.827 09:42:08 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' FreeBSD '!=' FreeBSD ']' 00:15:40.827 09:42:08 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:15:40.827 00:15:40.827 real 0m12.936s 00:15:40.827 user 0m10.920s 00:15:40.827 sys 0m2.555s 00:15:40.827 09:42:08 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:40.827 ************************************ 00:15:40.827 09:42:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:40.827 END TEST skip_rpc 00:15:40.827 ************************************ 00:15:40.827 09:42:08 -- common/autotest_common.sh@1142 -- # return 0 00:15:40.827 09:42:08 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:15:40.827 09:42:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:40.827 09:42:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:40.827 09:42:08 -- common/autotest_common.sh@10 -- # set +x 00:15:40.827 ************************************ 00:15:40.827 START TEST rpc_client 00:15:40.827 ************************************ 00:15:40.827 09:42:08 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:15:40.827 * Looking for test storage... 
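The es arithmetic visible in the delay test above is the suite's convention for expected failures: the command's exit status is captured, statuses above 128 (signal deaths) still fail the test, and any other non-zero status counts as the expected error. A stripped-down rendition of that helper; the real one in autotest_common.sh also validates the executable first via valid_exec_arg, as the xtrace above shows:

# Succeed iff "$@" fails with an ordinary non-zero status (not a signal).
NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return "$es"   # 129+ means killed by a signal: real failure
    (( es != 0 ))                    # NOT succeeds exactly when the command failed
}

# As exercised above: the target must refuse this flag combination.
#   NOT spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc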
00:15:40.827 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:15:40.827 09:42:08 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:15:40.827 OK 00:15:40.827 09:42:08 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:15:40.827 00:15:40.827 real 0m0.189s 00:15:40.827 user 0m0.111s 00:15:40.827 sys 0m0.139s 00:15:40.827 09:42:08 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:40.827 09:42:08 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:15:40.827 ************************************ 00:15:40.827 END TEST rpc_client 00:15:40.827 ************************************ 00:15:40.827 09:42:08 -- common/autotest_common.sh@1142 -- # return 0 00:15:40.827 09:42:08 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:15:40.827 09:42:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:40.827 09:42:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:40.827 09:42:08 -- common/autotest_common.sh@10 -- # set +x 00:15:40.827 ************************************ 00:15:40.827 START TEST json_config 00:15:40.827 ************************************ 00:15:40.827 09:42:08 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:15:40.827 09:42:08 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:40.827 09:42:08 json_config -- nvmf/common.sh@7 -- # uname -s 00:15:40.827 09:42:08 json_config -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:15:40.827 09:42:08 json_config -- nvmf/common.sh@7 -- # return 0 00:15:40.827 09:42:08 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:15:40.827 09:42:08 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:15:40.827 09:42:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:15:40.827 09:42:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:15:40.827 09:42:08 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:15:40.827 09:42:08 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:15:40.827 09:42:08 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:15:40.827 09:42:08 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:15:40.827 09:42:08 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:15:40.827 09:42:08 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:15:40.827 09:42:08 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:15:40.827 09:42:08 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:15:40.827 09:42:08 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:15:40.827 09:42:08 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:15:40.827 09:42:08 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit 
"${FUNCNAME}" "${LINENO}"' ERR 00:15:40.827 INFO: JSON configuration test init 00:15:40.827 09:42:08 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:15:40.827 09:42:08 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:15:40.827 09:42:08 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:15:40.827 09:42:08 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:40.827 09:42:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:40.827 09:42:08 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:15:40.827 09:42:08 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:40.827 09:42:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:40.827 09:42:08 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:15:40.827 09:42:08 json_config -- json_config/common.sh@9 -- # local app=target 00:15:40.827 09:42:08 json_config -- json_config/common.sh@10 -- # shift 00:15:40.827 09:42:08 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:15:40.827 09:42:08 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:15:40.827 09:42:08 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:15:40.827 09:42:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:40.827 09:42:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:40.827 09:42:08 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=45887 00:15:40.827 Waiting for target to run... 00:15:40.827 09:42:08 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:15:40.827 09:42:08 json_config -- json_config/common.sh@25 -- # waitforlisten 45887 /var/tmp/spdk_tgt.sock 00:15:40.827 09:42:08 json_config -- common/autotest_common.sh@829 -- # '[' -z 45887 ']' 00:15:40.827 09:42:08 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:15:40.827 09:42:08 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:40.827 09:42:08 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:15:40.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:15:40.827 09:42:08 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:15:40.828 09:42:08 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:40.828 09:42:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:40.828 [2024-07-15 09:42:08.629029] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:15:40.828 [2024-07-15 09:42:08.629426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:41.086 EAL: TSC is not safe to use in SMP mode 00:15:41.086 EAL: TSC is not invariant 00:15:41.086 [2024-07-15 09:42:08.995088] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.086 [2024-07-15 09:42:09.107041] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:15:41.086 [2024-07-15 09:42:09.109552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.655 09:42:09 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:41.655 09:42:09 json_config -- common/autotest_common.sh@862 -- # return 0 00:15:41.655 00:15:41.655 09:42:09 json_config -- json_config/common.sh@26 -- # echo '' 00:15:41.655 09:42:09 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:15:41.655 09:42:09 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:15:41.655 09:42:09 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:41.655 09:42:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:41.655 09:42:09 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:15:41.655 09:42:09 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:15:41.655 09:42:09 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:41.655 09:42:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:41.655 09:42:09 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:15:41.655 09:42:09 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:15:41.655 09:42:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:15:41.914 [2024-07-15 09:42:09.837144] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:15:41.914 09:42:09 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:15:41.915 09:42:09 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:15:41.915 09:42:09 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:41.915 09:42:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:41.915 09:42:09 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:15:41.915 09:42:09 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:15:41.915 09:42:09 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:15:41.915 09:42:09 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:15:41.915 09:42:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:15:41.915 09:42:09 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:15:42.174 09:42:10 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:15:42.174 09:42:10 json_config -- json_config/json_config.sh@48 -- # local get_types 00:15:42.174 09:42:10 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:15:42.174 09:42:10 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:15:42.174 09:42:10 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:42.174 09:42:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:42.174 09:42:10 json_config -- json_config/json_config.sh@55 -- # return 0 00:15:42.174 09:42:10 json_config -- json_config/json_config.sh@278 -- # [[ 1 -eq 1 ]] 00:15:42.174 09:42:10 json_config -- json_config/json_config.sh@279 -- # create_bdev_subsystem_config 00:15:42.174 
09:42:10 json_config -- json_config/json_config.sh@105 -- # timing_enter create_bdev_subsystem_config 00:15:42.174 09:42:10 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:42.174 09:42:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:42.174 09:42:10 json_config -- json_config/json_config.sh@107 -- # expected_notifications=() 00:15:42.175 09:42:10 json_config -- json_config/json_config.sh@107 -- # local expected_notifications 00:15:42.175 09:42:10 json_config -- json_config/json_config.sh@111 -- # expected_notifications+=($(get_notifications)) 00:15:42.175 09:42:10 json_config -- json_config/json_config.sh@111 -- # get_notifications 00:15:42.175 09:42:10 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:15:42.175 09:42:10 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:15:42.175 09:42:10 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:15:42.175 09:42:10 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:15:42.175 09:42:10 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:15:42.175 09:42:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:15:42.435 09:42:10 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:15:42.435 09:42:10 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:15:42.435 09:42:10 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:15:42.435 09:42:10 json_config -- json_config/json_config.sh@113 -- # [[ 1 -eq 1 ]] 00:15:42.435 09:42:10 json_config -- json_config/json_config.sh@114 -- # local lvol_store_base_bdev=Nvme0n1 00:15:42.435 09:42:10 json_config -- json_config/json_config.sh@116 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:15:42.435 09:42:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:15:42.695 Nvme0n1p0 Nvme0n1p1 00:15:42.695 09:42:10 json_config -- json_config/json_config.sh@117 -- # tgt_rpc bdev_split_create Malloc0 3 00:15:42.695 09:42:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:15:42.695 [2024-07-15 09:42:10.754768] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:15:42.695 [2024-07-15 09:42:10.754833] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:15:42.695 00:15:42.695 09:42:10 json_config -- json_config/json_config.sh@118 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:15:42.695 09:42:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:15:42.953 Malloc3 00:15:42.953 09:42:10 json_config -- json_config/json_config.sh@119 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:15:42.953 09:42:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:15:43.211 [2024-07-15 09:42:11.122784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:15:43.211 [2024-07-15 09:42:11.122850] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
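create_bdev_subsystem_config, underway above, builds a deliberately varied bdev topology so the saved JSON exercises several bdev modules at once. One detail worth noting from the notices above: bdev_split_create Malloc0 3 is issued before Malloc0 exists, so the "Currently unable to find bdev" messages are tolerated; the split configuration sits registered and takes effect once the base bdev is created, which is why Malloc0p0..p2 still show up in the expected notifications later. The sequence, as plain RPCs with this run's sizes:

tgt_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }

tgt_rpc bdev_split_create Nvme0n1 2                # -> Nvme0n1p0, Nvme0n1p1
tgt_rpc bdev_split_create Malloc0 3                # registered now, applied later
tgt_rpc bdev_malloc_create 8 4096 --name Malloc3   # 8 MB, 4096 B blocks
tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3
tgt_rpc bdev_null_create Null0 32 512              # 32 MB null bdev, 512 B blocks
tgt_rpc bdev_malloc_create 32 512 --name Malloc0   # Malloc0p0..p2 appear here
tgt_rpc bdev_malloc_create 16 4096 --name Malloc1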
00:15:43.211 [2024-07-15 09:42:11.122893] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x158eb1a38180 00:15:43.211 [2024-07-15 09:42:11.122910] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.211 [2024-07-15 09:42:11.123664] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.211 [2024-07-15 09:42:11.123697] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:15:43.211 PTBdevFromMalloc3 00:15:43.211 09:42:11 json_config -- json_config/json_config.sh@121 -- # tgt_rpc bdev_null_create Null0 32 512 00:15:43.211 09:42:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:15:43.468 Null0 00:15:43.468 09:42:11 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:15:43.468 09:42:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:15:43.468 Malloc0 00:15:43.468 09:42:11 json_config -- json_config/json_config.sh@124 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:15:43.468 09:42:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:15:43.725 Malloc1 00:15:43.725 09:42:11 json_config -- json_config/json_config.sh@137 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:15:43.726 09:42:11 json_config -- json_config/json_config.sh@140 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:15:43.984 102400+0 records in 00:15:43.984 102400+0 records out 00:15:43.984 104857600 bytes transferred in 0.300382 secs (349081372 bytes/sec) 00:15:43.984 09:42:12 json_config -- json_config/json_config.sh@141 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:15:43.984 09:42:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:15:44.242 aio_disk 00:15:44.243 09:42:12 json_config -- json_config/json_config.sh@142 -- # expected_notifications+=(bdev_register:aio_disk) 00:15:44.243 09:42:12 json_config -- json_config/json_config.sh@147 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:15:44.243 09:42:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:15:44.500 8319c587-428e-11ef-a0af-c98d8ee52a94 00:15:44.500 09:42:12 json_config -- json_config/json_config.sh@154 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:15:44.500 09:42:12 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:15:44.500 09:42:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:15:44.758 09:42:12 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:15:44.758 09:42:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:15:44.758 09:42:12 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:15:44.758 09:42:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:15:45.015 09:42:13 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:15:45.015 09:42:13 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:15:45.295 09:42:13 json_config -- json_config/json_config.sh@157 -- # [[ 0 -eq 1 ]] 00:15:45.295 09:42:13 json_config -- json_config/json_config.sh@172 -- # [[ 0 -eq 1 ]] 00:15:45.295 09:42:13 json_config -- json_config/json_config.sh@178 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:8338e6ee-428e-11ef-a0af-c98d8ee52a94 bdev_register:83563307-428e-11ef-a0af-c98d8ee52a94 bdev_register:837e7bca-428e-11ef-a0af-c98d8ee52a94 bdev_register:83995794-428e-11ef-a0af-c98d8ee52a94 00:15:45.295 09:42:13 json_config -- json_config/json_config.sh@67 -- # local events_to_check 00:15:45.295 09:42:13 json_config -- json_config/json_config.sh@68 -- # local recorded_events 00:15:45.295 09:42:13 json_config -- json_config/json_config.sh@71 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:15:45.295 09:42:13 json_config -- json_config/json_config.sh@71 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:8338e6ee-428e-11ef-a0af-c98d8ee52a94 bdev_register:83563307-428e-11ef-a0af-c98d8ee52a94 bdev_register:837e7bca-428e-11ef-a0af-c98d8ee52a94 bdev_register:83995794-428e-11ef-a0af-c98d8ee52a94 00:15:45.295 09:42:13 json_config -- json_config/json_config.sh@71 -- # sort 00:15:45.295 09:42:13 json_config -- json_config/json_config.sh@72 -- # recorded_events=($(get_notifications | sort)) 00:15:45.295 09:42:13 json_config -- json_config/json_config.sh@72 -- # get_notifications 00:15:45.295 09:42:13 json_config -- json_config/json_config.sh@72 -- # sort 00:15:45.295 09:42:13 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:15:45.295 09:42:13 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:15:45.295 09:42:13 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:15:45.295 09:42:13 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:15:45.295 09:42:13 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:15:45.295 09:42:13 json_config -- json_config/common.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:15:45.563 09:42:13 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:15:45.563 09:42:13 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:15:45.563 09:42:13 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:15:45.563 09:42:13 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p1 00:15:45.563 09:42:13 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:15:45.563 09:42:13 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:15:45.563 09:42:13 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p0 00:15:45.563 09:42:13 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:15:45.563 09:42:13 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:15:45.563 09:42:13 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc3 00:15:45.563 09:42:13 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:15:45.563 09:42:13 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:15:45.563 09:42:13 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:PTBdevFromMalloc3 00:15:45.563 09:42:13 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:15:45.563 09:42:13 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:15:45.563 09:42:13 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Null0 00:15:45.563 09:42:13 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:15:45.563 09:42:13 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:15:45.563 09:42:13 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0 00:15:45.563 09:42:13 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:15:45.563 09:42:13 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:15:45.563 09:42:13 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p2 00:15:45.563 09:42:13 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:15:45.563 09:42:13 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:15:45.563 09:42:13 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p1 00:15:45.563 09:42:13 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:15:45.563 09:42:13 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:15:45.563 09:42:13 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p0 00:15:45.563 09:42:13 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:15:45.563 09:42:13 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:15:45.563 09:42:13 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc1 00:15:45.564 09:42:13 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:15:45.564 09:42:13 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:15:45.564 09:42:13 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:aio_disk 00:15:45.564 09:42:13 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:15:45.564 09:42:13 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:15:45.564 09:42:13 json_config -- 
json_config/json_config.sh@62 -- # echo bdev_register:8338e6ee-428e-11ef-a0af-c98d8ee52a94 00:15:45.564 09:42:13 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:15:45.564 09:42:13 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:15:45.564 09:42:13 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:83563307-428e-11ef-a0af-c98d8ee52a94 00:15:45.564 09:42:13 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:15:45.564 09:42:13 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:15:45.564 09:42:13 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:837e7bca-428e-11ef-a0af-c98d8ee52a94 00:15:45.564 09:42:13 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:15:45.564 09:42:13 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:15:45.564 09:42:13 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:83995794-428e-11ef-a0af-c98d8ee52a94 00:15:45.564 09:42:13 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:15:45.564 09:42:13 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:15:45.564 09:42:13 json_config -- json_config/json_config.sh@74 -- # [[ bdev_register:8338e6ee-428e-11ef-a0af-c98d8ee52a94 bdev_register:83563307-428e-11ef-a0af-c98d8ee52a94 bdev_register:837e7bca-428e-11ef-a0af-c98d8ee52a94 bdev_register:83995794-428e-11ef-a0af-c98d8ee52a94 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\8\3\3\8\e\6\e\e\-\4\2\8\e\-\1\1\e\f\-\a\0\a\f\-\c\9\8\d\8\e\e\5\2\a\9\4\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\8\3\5\6\3\3\0\7\-\4\2\8\e\-\1\1\e\f\-\a\0\a\f\-\c\9\8\d\8\e\e\5\2\a\9\4\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\8\3\7\e\7\b\c\a\-\4\2\8\e\-\1\1\e\f\-\a\0\a\f\-\c\9\8\d\8\e\e\5\2\a\9\4\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\8\3\9\9\5\7\9\4\-\4\2\8\e\-\1\1\e\f\-\a\0\a\f\-\c\9\8\d\8\e\e\5\2\a\9\4\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k ]] 00:15:45.564 09:42:13 json_config -- json_config/json_config.sh@86 -- # cat 00:15:45.564 09:42:13 json_config -- json_config/json_config.sh@86 -- # printf ' %s\n' bdev_register:8338e6ee-428e-11ef-a0af-c98d8ee52a94 bdev_register:83563307-428e-11ef-a0af-c98d8ee52a94 bdev_register:837e7bca-428e-11ef-a0af-c98d8ee52a94 bdev_register:83995794-428e-11ef-a0af-c98d8ee52a94 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk 00:15:45.564 Expected events matched: 00:15:45.564 bdev_register:8338e6ee-428e-11ef-a0af-c98d8ee52a94 00:15:45.564 
bdev_register:83563307-428e-11ef-a0af-c98d8ee52a94 00:15:45.564 bdev_register:837e7bca-428e-11ef-a0af-c98d8ee52a94 00:15:45.564 bdev_register:83995794-428e-11ef-a0af-c98d8ee52a94 00:15:45.564 bdev_register:Malloc0 00:15:45.564 bdev_register:Malloc0p0 00:15:45.564 bdev_register:Malloc0p1 00:15:45.564 bdev_register:Malloc0p2 00:15:45.564 bdev_register:Malloc1 00:15:45.564 bdev_register:Malloc3 00:15:45.564 bdev_register:Null0 00:15:45.564 bdev_register:Nvme0n1 00:15:45.564 bdev_register:Nvme0n1p0 00:15:45.564 bdev_register:Nvme0n1p1 00:15:45.564 bdev_register:PTBdevFromMalloc3 00:15:45.564 bdev_register:aio_disk 00:15:45.564 09:42:13 json_config -- json_config/json_config.sh@180 -- # timing_exit create_bdev_subsystem_config 00:15:45.564 09:42:13 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:45.564 09:42:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:45.564 09:42:13 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:15:45.564 09:42:13 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:15:45.564 09:42:13 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:15:45.564 09:42:13 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:15:45.564 09:42:13 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:45.564 09:42:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:45.564 09:42:13 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:15:45.564 09:42:13 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:15:45.564 09:42:13 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:15:45.823 MallocBdevForConfigChangeCheck 00:15:45.823 09:42:13 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:15:45.823 09:42:13 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:45.823 09:42:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:45.823 09:42:13 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:15:45.823 09:42:13 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:15:46.081 INFO: shutting down applications... 00:15:46.081 09:42:14 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
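[Note] The tgt_check_notifications helper traced above verifies the bdev event stream by sorting the expected bdev_register:* events against what notify_get_notifications actually recorded, and failing on any mismatch. A minimal sketch of that idiom, assuming the rpc.py socket and jq projection seen in this run (function names here are illustrative):

    get_notifications() {
        # Project each notification to type:ctx, dropping the monotonically
        # increasing event id -- exactly what the IFS=: read loop above does.
        local ev_type ev_ctx event_id
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
            notify_get_notifications -i 0 |
            jq -r '.[] | "\(.type):\(.ctx):\(.id)"' |
            while IFS=: read -r ev_type ev_ctx event_id; do
                echo "${ev_type}:${ev_ctx}"
            done
    }

    tgt_check_notifications_sketch() {
        # Sorting both sides makes the comparison order-insensitive.
        local events_to_check recorded_events
        events_to_check=($(printf '%s\n' "$@" | sort))
        recorded_events=($(get_notifications | sort))
        [[ "${events_to_check[*]}" == "${recorded_events[*]}" ]]
    }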
00:15:46.081 09:42:14 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:15:46.081 09:42:14 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:15:46.081 09:42:14 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:15:46.081 09:42:14 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:15:46.340 [2024-07-15 09:42:14.242909] vbdev_lvol.c: 151:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:15:46.340 Calling clear_iscsi_subsystem 00:15:46.340 Calling clear_nvmf_subsystem 00:15:46.340 Calling clear_bdev_subsystem 00:15:46.340 09:42:14 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:15:46.340 09:42:14 json_config -- json_config/json_config.sh@343 -- # count=100 00:15:46.340 09:42:14 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:15:46.340 09:42:14 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:15:46.340 09:42:14 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:15:46.340 09:42:14 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:15:46.907 09:42:14 json_config -- json_config/json_config.sh@345 -- # break 00:15:46.907 09:42:14 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:15:46.907 09:42:14 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:15:46.907 09:42:14 json_config -- json_config/common.sh@31 -- # local app=target 00:15:46.907 09:42:14 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:15:46.907 09:42:14 json_config -- json_config/common.sh@35 -- # [[ -n 45887 ]] 00:15:46.907 09:42:14 json_config -- json_config/common.sh@38 -- # kill -SIGINT 45887 00:15:46.907 09:42:14 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:15:46.907 09:42:14 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:46.907 09:42:14 json_config -- json_config/common.sh@41 -- # kill -0 45887 00:15:46.907 09:42:14 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:15:47.165 09:42:15 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:15:47.165 09:42:15 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:47.165 09:42:15 json_config -- json_config/common.sh@41 -- # kill -0 45887 00:15:47.165 09:42:15 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:15:47.165 09:42:15 json_config -- json_config/common.sh@43 -- # break 00:15:47.165 09:42:15 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:15:47.165 SPDK target shutdown done 00:15:47.165 09:42:15 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:15:47.165 INFO: relaunching applications... 00:15:47.165 09:42:15 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
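[Note] The shutdown traced above (common.sh lines 38-45) is a SIGINT followed by a bounded poll: up to 30 half-second checks, roughly 15 seconds, before the suite would give up on the target. A sketch of that loop, with pid standing in for app_pid["$app"]:

    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        if ! kill -0 "$pid" 2>/dev/null; then   # process gone: shutdown finished
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done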
00:15:47.165 09:42:15 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:47.165 09:42:15 json_config -- json_config/common.sh@9 -- # local app=target 00:15:47.165 09:42:15 json_config -- json_config/common.sh@10 -- # shift 00:15:47.165 09:42:15 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:15:47.165 09:42:15 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:15:47.165 09:42:15 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:15:47.165 09:42:15 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:47.165 09:42:15 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:47.165 09:42:15 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=46069 00:15:47.165 Waiting for target to run... 00:15:47.165 09:42:15 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:15:47.165 09:42:15 json_config -- json_config/common.sh@25 -- # waitforlisten 46069 /var/tmp/spdk_tgt.sock 00:15:47.165 09:42:15 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:47.165 09:42:15 json_config -- common/autotest_common.sh@829 -- # '[' -z 46069 ']' 00:15:47.165 09:42:15 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:15:47.165 09:42:15 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:47.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:15:47.165 09:42:15 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:15:47.165 09:42:15 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:47.165 09:42:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:47.165 [2024-07-15 09:42:15.245851] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:15:47.165 [2024-07-15 09:42:15.246138] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:47.733 EAL: TSC is not safe to use in SMP mode 00:15:47.733 EAL: TSC is not invariant 00:15:47.733 [2024-07-15 09:42:15.598567] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.733 [2024-07-15 09:42:15.708414] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
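[Note] The relaunch above restarts spdk_tgt from the previously saved JSON (-m 0x1 pins one core, -s 1024 caps reserved memory at 1024 MB) and then blocks in waitforlisten until the RPC socket answers. A sketch of that start-and-wait pattern; the readiness poll is an illustrative stand-in for the suite's waitforlisten helper, not its actual implementation:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json &
    pid=$!
    # Poll an inexpensive RPC until the target starts answering on the socket.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
          -t 1 spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done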
00:15:47.733 [2024-07-15 09:42:15.710915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.992 [2024-07-15 09:42:15.860400] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:15:47.992 [2024-07-15 09:42:15.860457] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:15:47.992 [2024-07-15 09:42:15.868383] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:15:47.992 [2024-07-15 09:42:15.868404] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:15:47.992 [2024-07-15 09:42:15.876401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:15:47.992 [2024-07-15 09:42:15.876420] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:15:47.992 [2024-07-15 09:42:15.876427] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:15:47.992 [2024-07-15 09:42:15.884398] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:15:47.992 [2024-07-15 09:42:15.953462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:15:47.992 [2024-07-15 09:42:15.953512] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.992 [2024-07-15 09:42:15.953520] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x341802e37780 00:15:47.992 [2024-07-15 09:42:15.953527] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.992 [2024-07-15 09:42:15.953581] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.992 [2024-07-15 09:42:15.953589] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:15:48.252 09:42:16 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:48.252 09:42:16 json_config -- common/autotest_common.sh@862 -- # return 0 00:15:48.252 00:15:48.252 09:42:16 json_config -- json_config/common.sh@26 -- # echo '' 00:15:48.252 INFO: Checking if target configuration is the same... 00:15:48.252 09:42:16 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:15:48.252 09:42:16 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:15:48.252 09:42:16 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /tmp//sh-np.9xyHUT /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:48.252 + '[' 2 -ne 2 ']' 00:15:48.252 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:15:48.252 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:15:48.252 + rootdir=/home/vagrant/spdk_repo/spdk 00:15:48.252 +++ basename /tmp//sh-np.9xyHUT 00:15:48.252 ++ mktemp /tmp/sh-np.9xyHUT.XXX 00:15:48.252 + tmp_file_1=/tmp/sh-np.9xyHUT.zPM 00:15:48.252 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:48.252 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:15:48.252 + tmp_file_2=/tmp/spdk_tgt_config.json.zNz 00:15:48.252 + ret=0 00:15:48.252 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:15:48.252 09:42:16 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:15:48.252 09:42:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:15:48.512 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:15:48.512 + diff -u /tmp/sh-np.9xyHUT.zPM /tmp/spdk_tgt_config.json.zNz 00:15:48.512 + echo 'INFO: JSON config files are the same' 00:15:48.512 INFO: JSON config files are the same 00:15:48.512 + rm /tmp/sh-np.9xyHUT.zPM /tmp/spdk_tgt_config.json.zNz 00:15:48.512 + exit 0 00:15:48.512 09:42:16 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:15:48.512 INFO: changing configuration and checking if this can be detected... 00:15:48.512 09:42:16 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:15:48.512 09:42:16 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:15:48.512 09:42:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:15:48.770 09:42:16 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /tmp//sh-np.ndrfB5 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:48.770 + '[' 2 -ne 2 ']' 00:15:48.770 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:15:48.770 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:15:48.770 + rootdir=/home/vagrant/spdk_repo/spdk 00:15:48.770 +++ basename /tmp//sh-np.ndrfB5 00:15:48.770 ++ mktemp /tmp/sh-np.ndrfB5.XXX 00:15:48.770 + tmp_file_1=/tmp/sh-np.ndrfB5.Nyv 00:15:48.770 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:48.770 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:15:48.770 + tmp_file_2=/tmp/spdk_tgt_config.json.4w1 00:15:48.770 + ret=0 00:15:48.770 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:15:48.770 09:42:16 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:15:48.770 09:42:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:15:49.338 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:15:49.338 + diff -u /tmp/sh-np.ndrfB5.Nyv /tmp/spdk_tgt_config.json.4w1 00:15:49.338 + ret=1 00:15:49.338 + echo '=== Start of file: /tmp/sh-np.ndrfB5.Nyv ===' 00:15:49.338 + cat /tmp/sh-np.ndrfB5.Nyv 00:15:49.338 + echo '=== End of file: /tmp/sh-np.ndrfB5.Nyv ===' 00:15:49.338 + echo '' 00:15:49.338 + echo '=== Start of file: /tmp/spdk_tgt_config.json.4w1 ===' 00:15:49.338 + cat /tmp/spdk_tgt_config.json.4w1 00:15:49.338 + echo '=== End of file: /tmp/spdk_tgt_config.json.4w1 ===' 00:15:49.338 + echo '' 00:15:49.338 + rm /tmp/sh-np.ndrfB5.Nyv /tmp/spdk_tgt_config.json.4w1 00:15:49.338 + exit 1 00:15:49.338 INFO: configuration change detected. 00:15:49.339 09:42:17 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:15:49.339 09:42:17 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:15:49.339 09:42:17 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:15:49.339 09:42:17 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:49.339 09:42:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:49.339 09:42:17 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:15:49.339 09:42:17 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:15:49.339 09:42:17 json_config -- json_config/json_config.sh@317 -- # [[ -n 46069 ]] 00:15:49.339 09:42:17 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:15:49.339 09:42:17 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:15:49.339 09:42:17 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:49.339 09:42:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:49.339 09:42:17 json_config -- json_config/json_config.sh@186 -- # [[ 1 -eq 1 ]] 00:15:49.339 09:42:17 json_config -- json_config/json_config.sh@187 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:15:49.339 09:42:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:15:49.598 09:42:17 json_config -- json_config/json_config.sh@188 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:15:49.598 09:42:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:15:49.598 09:42:17 json_config -- json_config/json_config.sh@189 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:15:49.598 09:42:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 
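[Note] The change-detection pass above reuses the same save_config/diff pipeline as the identity check before it; once MallocBdevForConfigChangeCheck is deleted, the diff returns 1 and the change is reported. A sketch of the comparison json_diff.sh performs, with illustrative file names in place of the mktemp results:

    filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    # Normalize both configs so JSON key/array ordering cannot mask a delta.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config |
        "$filter" -method sort > /tmp/running.json
    "$filter" -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/saved.json
    if diff -u /tmp/saved.json /tmp/running.json; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'   # diff exits non-zero on any delta
    fi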
00:15:49.857 09:42:17 json_config -- json_config/json_config.sh@190 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:15:49.857 09:42:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:15:50.116 09:42:18 json_config -- json_config/json_config.sh@193 -- # uname -s 00:15:50.116 09:42:18 json_config -- json_config/json_config.sh@193 -- # [[ FreeBSD = Linux ]] 00:15:50.116 09:42:18 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:15:50.116 09:42:18 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:15:50.116 09:42:18 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:50.116 09:42:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:50.116 09:42:18 json_config -- json_config/json_config.sh@323 -- # killprocess 46069 00:15:50.116 09:42:18 json_config -- common/autotest_common.sh@948 -- # '[' -z 46069 ']' 00:15:50.116 09:42:18 json_config -- common/autotest_common.sh@952 -- # kill -0 46069 00:15:50.116 09:42:18 json_config -- common/autotest_common.sh@953 -- # uname 00:15:50.116 09:42:18 json_config -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:15:50.116 09:42:18 json_config -- common/autotest_common.sh@956 -- # ps -c -o command 46069 00:15:50.116 09:42:18 json_config -- common/autotest_common.sh@956 -- # tail -1 00:15:50.116 09:42:18 json_config -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:15:50.116 09:42:18 json_config -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:15:50.116 killing process with pid 46069 00:15:50.116 09:42:18 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 46069' 00:15:50.116 09:42:18 json_config -- common/autotest_common.sh@967 -- # kill 46069 00:15:50.116 09:42:18 json_config -- common/autotest_common.sh@972 -- # wait 46069 00:15:50.375 09:42:18 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:50.375 09:42:18 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:15:50.375 09:42:18 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:50.375 09:42:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:50.634 09:42:18 json_config -- json_config/json_config.sh@328 -- # return 0 00:15:50.634 INFO: Success 00:15:50.634 09:42:18 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:15:50.634 00:15:50.634 real 0m10.048s 00:15:50.634 user 0m14.962s 00:15:50.634 sys 0m2.372s 00:15:50.634 09:42:18 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:50.634 09:42:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:50.634 ************************************ 00:15:50.634 END TEST json_config 00:15:50.634 ************************************ 00:15:50.634 09:42:18 -- common/autotest_common.sh@1142 -- # return 0 00:15:50.634 09:42:18 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:15:50.634 09:42:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:50.634 09:42:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:50.634 09:42:18 -- common/autotest_common.sh@10 -- # set +x 00:15:50.634 ************************************ 00:15:50.634 START TEST json_config_extra_key 
00:15:50.634 ************************************ 00:15:50.634 09:42:18 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:15:50.634 09:42:18 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:50.634 09:42:18 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:15:50.634 09:42:18 json_config_extra_key -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:15:50.634 09:42:18 json_config_extra_key -- nvmf/common.sh@7 -- # return 0 00:15:50.634 09:42:18 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:15:50.634 09:42:18 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:15:50.634 09:42:18 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:15:50.634 09:42:18 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:15:50.634 09:42:18 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:15:50.634 09:42:18 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:15:50.634 09:42:18 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:15:50.634 09:42:18 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:15:50.634 09:42:18 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:15:50.634 09:42:18 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:15:50.634 INFO: launching applications... 00:15:50.634 09:42:18 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:15:50.634 09:42:18 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:15:50.634 09:42:18 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:15:50.634 09:42:18 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:15:50.634 09:42:18 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:15:50.634 09:42:18 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:15:50.634 09:42:18 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:15:50.634 09:42:18 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:50.634 09:42:18 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:50.634 09:42:18 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=46202 00:15:50.634 Waiting for target to run... 00:15:50.634 09:42:18 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
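[Note] The common.sh sourcing above keeps all per-app state in associative arrays keyed by app name, so the same start/wait/shutdown helpers can serve every app the suite drives (this run only uses 'target'; the earlier json_config cleanup also referenced an initiator config). Reconstructed from the declarations traced above:

    declare -A app_pid=(['target']='')
    declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
    declare -A app_params=(['target']='-m 0x1 -s 1024')
    declare -A configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')
    trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR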
00:15:50.634 09:42:18 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 46202 /var/tmp/spdk_tgt.sock 00:15:50.634 09:42:18 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 46202 ']' 00:15:50.634 09:42:18 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:15:50.634 09:42:18 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:15:50.634 09:42:18 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:50.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:15:50.634 09:42:18 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:15:50.634 09:42:18 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:50.634 09:42:18 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:15:50.634 [2024-07-15 09:42:18.724460] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:15:50.634 [2024-07-15 09:42:18.724776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:51.202 EAL: TSC is not safe to use in SMP mode 00:15:51.202 EAL: TSC is not invariant 00:15:51.202 [2024-07-15 09:42:19.087609] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.202 [2024-07-15 09:42:19.198132] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:15:51.202 [2024-07-15 09:42:19.200618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.771 09:42:19 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:51.771 09:42:19 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:15:51.771 00:15:51.771 09:42:19 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:15:51.771 INFO: shutting down applications... 00:15:51.771 09:42:19 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:15:51.771 09:42:19 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:15:51.771 09:42:19 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:15:51.771 09:42:19 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:15:51.771 09:42:19 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 46202 ]] 00:15:51.771 09:42:19 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 46202 00:15:51.771 09:42:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:15:51.771 09:42:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:51.771 09:42:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 46202 00:15:51.771 09:42:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:15:52.339 09:42:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:15:52.339 09:42:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:52.339 09:42:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 46202 00:15:52.339 09:42:20 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:15:52.339 09:42:20 json_config_extra_key -- json_config/common.sh@43 -- # break 00:15:52.339 09:42:20 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:15:52.339 SPDK target shutdown done 00:15:52.339 09:42:20 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:15:52.339 Success 00:15:52.339 09:42:20 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:15:52.339 00:15:52.339 real 0m1.651s 00:15:52.339 user 0m1.415s 00:15:52.339 sys 0m0.518s 00:15:52.339 09:42:20 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:52.339 09:42:20 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:15:52.339 ************************************ 00:15:52.339 END TEST json_config_extra_key 00:15:52.339 ************************************ 00:15:52.339 09:42:20 -- common/autotest_common.sh@1142 -- # return 0 00:15:52.339 09:42:20 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:15:52.339 09:42:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:52.339 09:42:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:52.339 09:42:20 -- common/autotest_common.sh@10 -- # set +x 00:15:52.339 ************************************ 00:15:52.339 START TEST alias_rpc 00:15:52.339 ************************************ 00:15:52.339 09:42:20 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:15:52.339 * Looking for test storage... 
00:15:52.339 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:15:52.598 09:42:20 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:15:52.598 09:42:20 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:52.598 09:42:20 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=46256 00:15:52.598 09:42:20 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 46256 00:15:52.599 09:42:20 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 46256 ']' 00:15:52.599 09:42:20 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.599 09:42:20 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:52.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.599 09:42:20 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.599 09:42:20 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:52.599 09:42:20 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:52.599 [2024-07-15 09:42:20.439777] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:15:52.599 [2024-07-15 09:42:20.440038] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:53.166 EAL: TSC is not safe to use in SMP mode 00:15:53.166 EAL: TSC is not invariant 00:15:53.166 [2024-07-15 09:42:21.157601] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.425 [2024-07-15 09:42:21.272335] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:15:53.425 [2024-07-15 09:42:21.274753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.425 09:42:21 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:53.425 09:42:21 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:15:53.425 09:42:21 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:15:53.685 09:42:21 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 46256 00:15:53.685 09:42:21 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 46256 ']' 00:15:53.685 09:42:21 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 46256 00:15:53.685 09:42:21 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:15:53.685 09:42:21 alias_rpc -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:15:53.685 09:42:21 alias_rpc -- common/autotest_common.sh@956 -- # ps -c -o command 46256 00:15:53.685 09:42:21 alias_rpc -- common/autotest_common.sh@956 -- # tail -1 00:15:53.685 09:42:21 alias_rpc -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:15:53.685 09:42:21 alias_rpc -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:15:53.685 09:42:21 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 46256' 00:15:53.685 killing process with pid 46256 00:15:53.685 09:42:21 alias_rpc -- common/autotest_common.sh@967 -- # kill 46256 00:15:53.685 09:42:21 alias_rpc -- common/autotest_common.sh@972 -- # wait 46256 00:15:53.945 00:15:53.945 real 0m1.742s 00:15:53.945 user 0m1.484s 00:15:53.945 sys 0m0.976s 00:15:53.945 09:42:21 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:53.945 09:42:22 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:53.945 ************************************ 00:15:53.945 END TEST alias_rpc 00:15:53.945 ************************************ 00:15:54.205 09:42:22 -- common/autotest_common.sh@1142 -- # return 0 00:15:54.205 09:42:22 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:15:54.205 09:42:22 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:15:54.205 09:42:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:54.205 09:42:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:54.205 09:42:22 -- common/autotest_common.sh@10 -- # set +x 00:15:54.205 ************************************ 00:15:54.205 START TEST spdkcli_tcp 00:15:54.205 ************************************ 00:15:54.205 09:42:22 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:15:54.205 * Looking for test storage... 
00:15:54.205 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:15:54.205 09:42:22 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:15:54.205 09:42:22 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:15:54.205 09:42:22 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:15:54.205 09:42:22 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:15:54.205 09:42:22 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:15:54.205 09:42:22 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:54.205 09:42:22 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:15:54.205 09:42:22 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:54.205 09:42:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:54.205 09:42:22 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=46321 00:15:54.205 09:42:22 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 46321 00:15:54.205 09:42:22 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:15:54.205 09:42:22 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 46321 ']' 00:15:54.205 09:42:22 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.205 09:42:22 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:54.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.205 09:42:22 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.205 09:42:22 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:54.205 09:42:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:54.205 [2024-07-15 09:42:22.250635] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:15:54.205 [2024-07-15 09:42:22.250956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:55.143 EAL: TSC is not safe to use in SMP mode 00:15:55.143 EAL: TSC is not invariant 00:15:55.143 [2024-07-15 09:42:22.943731] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:55.143 [2024-07-15 09:42:23.048979] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:15:55.143 [2024-07-15 09:42:23.049038] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
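[Note] The killprocess helper that ended the alias_rpc run above has a FreeBSD-specific wrinkle: with no /proc to consult, the process name comes from ps -c -o command piped through tail -1 (ps prints a COMMAND header line first), and a name of sudo is special-cased before signalling. A rough sketch; the Linux branch is an assumption, since only the FreeBSD path is exercised in this run:

    killprocess_sketch() {
        local pid=$1 process_name
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps -o comm= -p "$pid")   # assumed Linux equivalent
        else
            process_name=$(ps -c -o command "$pid" | tail -1)
        fi
        # sudo-wrapped targets need different handling (not shown here).
        [ "$process_name" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }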
00:15:55.143 [2024-07-15 09:42:23.096211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:55.143 [2024-07-15 09:42:23.096343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.402 09:42:23 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:55.402 09:42:23 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:15:55.402 09:42:23 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=46329 00:15:55.402 09:42:23 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:15:55.402 09:42:23 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:15:55.402 [ 00:15:55.402 "spdk_get_version", 00:15:55.402 "rpc_get_methods", 00:15:55.402 "env_dpdk_get_mem_stats", 00:15:55.402 "trace_get_info", 00:15:55.402 "trace_get_tpoint_group_mask", 00:15:55.402 "trace_disable_tpoint_group", 00:15:55.402 "trace_enable_tpoint_group", 00:15:55.402 "trace_clear_tpoint_mask", 00:15:55.402 "trace_set_tpoint_mask", 00:15:55.402 "notify_get_notifications", 00:15:55.402 "notify_get_types", 00:15:55.402 "accel_get_stats", 00:15:55.402 "accel_set_options", 00:15:55.402 "accel_set_driver", 00:15:55.402 "accel_crypto_key_destroy", 00:15:55.402 "accel_crypto_keys_get", 00:15:55.402 "accel_crypto_key_create", 00:15:55.402 "accel_assign_opc", 00:15:55.402 "accel_get_module_info", 00:15:55.402 "accel_get_opc_assignments", 00:15:55.402 "bdev_get_histogram", 00:15:55.402 "bdev_enable_histogram", 00:15:55.402 "bdev_set_qos_limit", 00:15:55.402 "bdev_set_qd_sampling_period", 00:15:55.402 "bdev_get_bdevs", 00:15:55.402 "bdev_reset_iostat", 00:15:55.402 "bdev_get_iostat", 00:15:55.402 "bdev_examine", 00:15:55.402 "bdev_wait_for_examine", 00:15:55.402 "bdev_set_options", 00:15:55.402 "keyring_get_keys", 00:15:55.402 "framework_get_pci_devices", 00:15:55.402 "framework_get_config", 00:15:55.402 "framework_get_subsystems", 00:15:55.402 "sock_get_default_impl", 00:15:55.402 "sock_set_default_impl", 00:15:55.402 "sock_impl_set_options", 00:15:55.402 "sock_impl_get_options", 00:15:55.402 "thread_set_cpumask", 00:15:55.402 "framework_get_governor", 00:15:55.402 "framework_get_scheduler", 00:15:55.402 "framework_set_scheduler", 00:15:55.402 "framework_get_reactors", 00:15:55.402 "thread_get_io_channels", 00:15:55.402 "thread_get_pollers", 00:15:55.402 "thread_get_stats", 00:15:55.402 "framework_monitor_context_switch", 00:15:55.402 "spdk_kill_instance", 00:15:55.402 "log_enable_timestamps", 00:15:55.402 "log_get_flags", 00:15:55.402 "log_clear_flag", 00:15:55.402 "log_set_flag", 00:15:55.402 "log_get_level", 00:15:55.402 "log_set_level", 00:15:55.402 "log_get_print_level", 00:15:55.402 "log_set_print_level", 00:15:55.402 "framework_enable_cpumask_locks", 00:15:55.402 "framework_disable_cpumask_locks", 00:15:55.402 "framework_wait_init", 00:15:55.402 "framework_start_init", 00:15:55.402 "iobuf_get_stats", 00:15:55.402 "iobuf_set_options", 00:15:55.402 "vmd_rescan", 00:15:55.402 "vmd_remove_device", 00:15:55.402 "vmd_enable", 00:15:55.402 "nvmf_stop_mdns_prr", 00:15:55.402 "nvmf_publish_mdns_prr", 00:15:55.402 "nvmf_subsystem_get_listeners", 00:15:55.402 "nvmf_subsystem_get_qpairs", 00:15:55.402 "nvmf_subsystem_get_controllers", 00:15:55.402 "nvmf_get_stats", 00:15:55.402 "nvmf_get_transports", 00:15:55.402 "nvmf_create_transport", 00:15:55.402 "nvmf_get_targets", 00:15:55.402 "nvmf_delete_target", 00:15:55.402 "nvmf_create_target", 00:15:55.402 
"nvmf_subsystem_allow_any_host", 00:15:55.402 "nvmf_subsystem_remove_host", 00:15:55.402 "nvmf_subsystem_add_host", 00:15:55.402 "nvmf_ns_remove_host", 00:15:55.402 "nvmf_ns_add_host", 00:15:55.402 "nvmf_subsystem_remove_ns", 00:15:55.402 "nvmf_subsystem_add_ns", 00:15:55.402 "nvmf_subsystem_listener_set_ana_state", 00:15:55.402 "nvmf_discovery_get_referrals", 00:15:55.402 "nvmf_discovery_remove_referral", 00:15:55.402 "nvmf_discovery_add_referral", 00:15:55.402 "nvmf_subsystem_remove_listener", 00:15:55.402 "nvmf_subsystem_add_listener", 00:15:55.402 "nvmf_delete_subsystem", 00:15:55.402 "nvmf_create_subsystem", 00:15:55.402 "nvmf_get_subsystems", 00:15:55.402 "nvmf_set_crdt", 00:15:55.402 "nvmf_set_config", 00:15:55.402 "nvmf_set_max_subsystems", 00:15:55.402 "scsi_get_devices", 00:15:55.402 "iscsi_get_histogram", 00:15:55.402 "iscsi_enable_histogram", 00:15:55.402 "iscsi_set_options", 00:15:55.402 "iscsi_get_auth_groups", 00:15:55.402 "iscsi_auth_group_remove_secret", 00:15:55.402 "iscsi_auth_group_add_secret", 00:15:55.402 "iscsi_delete_auth_group", 00:15:55.402 "iscsi_create_auth_group", 00:15:55.402 "iscsi_set_discovery_auth", 00:15:55.402 "iscsi_get_options", 00:15:55.402 "iscsi_target_node_request_logout", 00:15:55.402 "iscsi_target_node_set_redirect", 00:15:55.402 "iscsi_target_node_set_auth", 00:15:55.402 "iscsi_target_node_add_lun", 00:15:55.402 "iscsi_get_stats", 00:15:55.402 "iscsi_get_connections", 00:15:55.402 "iscsi_portal_group_set_auth", 00:15:55.402 "iscsi_start_portal_group", 00:15:55.402 "iscsi_delete_portal_group", 00:15:55.402 "iscsi_create_portal_group", 00:15:55.402 "iscsi_get_portal_groups", 00:15:55.402 "iscsi_delete_target_node", 00:15:55.402 "iscsi_target_node_remove_pg_ig_maps", 00:15:55.402 "iscsi_target_node_add_pg_ig_maps", 00:15:55.402 "iscsi_create_target_node", 00:15:55.402 "iscsi_get_target_nodes", 00:15:55.402 "iscsi_delete_initiator_group", 00:15:55.402 "iscsi_initiator_group_remove_initiators", 00:15:55.402 "iscsi_initiator_group_add_initiators", 00:15:55.402 "iscsi_create_initiator_group", 00:15:55.402 "iscsi_get_initiator_groups", 00:15:55.402 "keyring_file_remove_key", 00:15:55.402 "keyring_file_add_key", 00:15:55.402 "iaa_scan_accel_module", 00:15:55.402 "dsa_scan_accel_module", 00:15:55.402 "ioat_scan_accel_module", 00:15:55.402 "accel_error_inject_error", 00:15:55.402 "bdev_aio_delete", 00:15:55.402 "bdev_aio_rescan", 00:15:55.402 "bdev_aio_create", 00:15:55.402 "blobfs_create", 00:15:55.402 "blobfs_detect", 00:15:55.402 "blobfs_set_cache_size", 00:15:55.402 "bdev_zone_block_delete", 00:15:55.402 "bdev_zone_block_create", 00:15:55.402 "bdev_delay_delete", 00:15:55.402 "bdev_delay_create", 00:15:55.402 "bdev_delay_update_latency", 00:15:55.402 "bdev_split_delete", 00:15:55.402 "bdev_split_create", 00:15:55.402 "bdev_error_inject_error", 00:15:55.402 "bdev_error_delete", 00:15:55.402 "bdev_error_create", 00:15:55.402 "bdev_raid_set_options", 00:15:55.402 "bdev_raid_remove_base_bdev", 00:15:55.402 "bdev_raid_add_base_bdev", 00:15:55.402 "bdev_raid_delete", 00:15:55.402 "bdev_raid_create", 00:15:55.402 "bdev_raid_get_bdevs", 00:15:55.402 "bdev_lvol_set_parent_bdev", 00:15:55.402 "bdev_lvol_set_parent", 00:15:55.402 "bdev_lvol_check_shallow_copy", 00:15:55.402 "bdev_lvol_start_shallow_copy", 00:15:55.402 "bdev_lvol_grow_lvstore", 00:15:55.402 "bdev_lvol_get_lvols", 00:15:55.402 "bdev_lvol_get_lvstores", 00:15:55.402 "bdev_lvol_delete", 00:15:55.402 "bdev_lvol_set_read_only", 00:15:55.402 "bdev_lvol_resize", 00:15:55.402 "bdev_lvol_decouple_parent", 
00:15:55.402 "bdev_lvol_inflate", 00:15:55.402 "bdev_lvol_rename", 00:15:55.402 "bdev_lvol_clone_bdev", 00:15:55.402 "bdev_lvol_clone", 00:15:55.402 "bdev_lvol_snapshot", 00:15:55.402 "bdev_lvol_create", 00:15:55.402 "bdev_lvol_delete_lvstore", 00:15:55.402 "bdev_lvol_rename_lvstore", 00:15:55.402 "bdev_lvol_create_lvstore", 00:15:55.402 "bdev_passthru_delete", 00:15:55.402 "bdev_passthru_create", 00:15:55.402 "bdev_nvme_send_cmd", 00:15:55.402 "bdev_nvme_get_path_iostat", 00:15:55.402 "bdev_nvme_get_mdns_discovery_info", 00:15:55.402 "bdev_nvme_stop_mdns_discovery", 00:15:55.402 "bdev_nvme_start_mdns_discovery", 00:15:55.402 "bdev_nvme_set_multipath_policy", 00:15:55.403 "bdev_nvme_set_preferred_path", 00:15:55.403 "bdev_nvme_get_io_paths", 00:15:55.403 "bdev_nvme_remove_error_injection", 00:15:55.403 "bdev_nvme_add_error_injection", 00:15:55.403 "bdev_nvme_get_discovery_info", 00:15:55.403 "bdev_nvme_stop_discovery", 00:15:55.403 "bdev_nvme_start_discovery", 00:15:55.403 "bdev_nvme_get_controller_health_info", 00:15:55.403 "bdev_nvme_disable_controller", 00:15:55.403 "bdev_nvme_enable_controller", 00:15:55.403 "bdev_nvme_reset_controller", 00:15:55.403 "bdev_nvme_get_transport_statistics", 00:15:55.403 "bdev_nvme_apply_firmware", 00:15:55.403 "bdev_nvme_detach_controller", 00:15:55.403 "bdev_nvme_get_controllers", 00:15:55.403 "bdev_nvme_attach_controller", 00:15:55.403 "bdev_nvme_set_hotplug", 00:15:55.403 "bdev_nvme_set_options", 00:15:55.403 "bdev_null_resize", 00:15:55.403 "bdev_null_delete", 00:15:55.403 "bdev_null_create", 00:15:55.403 "bdev_malloc_delete", 00:15:55.403 "bdev_malloc_create" 00:15:55.403 ] 00:15:55.403 09:42:23 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:15:55.403 09:42:23 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:55.403 09:42:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:55.662 09:42:23 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:55.662 09:42:23 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 46321 00:15:55.662 09:42:23 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 46321 ']' 00:15:55.662 09:42:23 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 46321 00:15:55.662 09:42:23 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:15:55.662 09:42:23 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:15:55.662 09:42:23 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps -c -o command 46321 00:15:55.662 09:42:23 spdkcli_tcp -- common/autotest_common.sh@956 -- # tail -1 00:15:55.662 09:42:23 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:15:55.662 09:42:23 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:15:55.662 killing process with pid 46321 00:15:55.662 09:42:23 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 46321' 00:15:55.662 09:42:23 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 46321 00:15:55.662 09:42:23 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 46321 00:15:55.921 00:15:55.921 real 0m1.837s 00:15:55.921 user 0m2.098s 00:15:55.921 sys 0m1.037s 00:15:55.921 09:42:23 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:55.921 ************************************ 00:15:55.921 END TEST spdkcli_tcp 00:15:55.921 ************************************ 00:15:55.921 09:42:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:55.921 09:42:23 -- common/autotest_common.sh@1142 -- # return 
0 00:15:55.921 09:42:23 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:15:55.921 09:42:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:55.921 09:42:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:55.921 09:42:23 -- common/autotest_common.sh@10 -- # set +x 00:15:55.921 ************************************ 00:15:55.921 START TEST dpdk_mem_utility 00:15:55.922 ************************************ 00:15:55.922 09:42:23 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:15:56.180 * Looking for test storage... 00:15:56.180 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:15:56.180 09:42:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:15:56.180 09:42:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=46400 00:15:56.180 09:42:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 46400 00:15:56.180 09:42:24 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 46400 ']' 00:15:56.180 09:42:24 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.180 09:42:24 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:56.180 09:42:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:56.180 09:42:24 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.180 09:42:24 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:56.180 09:42:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:15:56.180 [2024-07-15 09:42:24.125814] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:15:56.181 [2024-07-15 09:42:24.126136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:56.749 EAL: TSC is not safe to use in SMP mode 00:15:56.749 EAL: TSC is not invariant 00:15:57.006 [2024-07-15 09:42:24.844294] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.006 [2024-07-15 09:42:24.952231] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:15:57.006 [2024-07-15 09:42:24.954812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.006 09:42:25 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:57.006 09:42:25 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:15:57.006 09:42:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:15:57.006 09:42:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:15:57.006 09:42:25 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.006 09:42:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:15:57.006 { 00:15:57.006 "filename": "/tmp/spdk_mem_dump.txt" 00:15:57.006 } 00:15:57.006 09:42:25 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.006 09:42:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:15:57.006 DPDK memory size 2048.000000 MiB in 1 heap(s) 00:15:57.006 1 heaps totaling size 2048.000000 MiB 00:15:57.006 size: 2048.000000 MiB heap id: 0 00:15:57.006 end heaps---------- 00:15:57.006 8 mempools totaling size 592.563660 MiB 00:15:57.006 size: 212.271240 MiB name: PDU_immediate_data_Pool 00:15:57.006 size: 153.489014 MiB name: PDU_data_out_Pool 00:15:57.006 size: 84.500549 MiB name: bdev_io_46400 00:15:57.006 size: 51.008362 MiB name: evtpool_46400 00:15:57.006 size: 50.000549 MiB name: msgpool_46400 00:15:57.006 size: 21.758911 MiB name: PDU_Pool 00:15:57.006 size: 19.508911 MiB name: SCSI_TASK_Pool 00:15:57.006 size: 0.026123 MiB name: Session_Pool 00:15:57.006 end mempools------- 00:15:57.006 6 memzones totaling size 4.142822 MiB 00:15:57.006 size: 1.000366 MiB name: RG_ring_0_46400 00:15:57.006 size: 1.000366 MiB name: RG_ring_1_46400 00:15:57.006 size: 1.000366 MiB name: RG_ring_4_46400 00:15:57.006 size: 1.000366 MiB name: RG_ring_5_46400 00:15:57.006 size: 0.125366 MiB name: RG_ring_2_46400 00:15:57.006 size: 0.015991 MiB name: RG_ring_3_46400 00:15:57.006 end memzones------- 00:15:57.264 09:42:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:15:57.264 heap id: 0 total size: 2048.000000 MiB number of busy elements: 40 number of free elements: 4 00:15:57.264 list of free elements. size: 1254.071838 MiB 00:15:57.264 element at address: 0x1060000000 with size: 1172.537476 MiB 00:15:57.264 element at address: 0x10f0000000 with size: 70.694031 MiB 00:15:57.264 element at address: 0x10d0000000 with size: 10.714783 MiB 00:15:57.264 element at address: 0x10d2700000 with size: 0.125549 MiB 00:15:57.264 list of standard malloc elements. 
size: 197.218018 MiB 00:15:57.264 element at address: 0x10d7bfff80 with size: 132.000122 MiB 00:15:57.264 element at address: 0x10f58b5f80 with size: 64.000122 MiB 00:15:57.264 element at address: 0x10d25fff80 with size: 1.000122 MiB 00:15:57.264 element at address: 0x10fffd9f00 with size: 0.140747 MiB 00:15:57.264 element at address: 0x10d276fc80 with size: 0.062622 MiB 00:15:57.264 element at address: 0x10ffffdf80 with size: 0.007935 MiB 00:15:57.264 element at address: 0x10f98b6480 with size: 0.000305 MiB 00:15:57.264 element at address: 0x10d2720240 with size: 0.000183 MiB 00:15:57.264 element at address: 0x10d2720300 with size: 0.000183 MiB 00:15:57.264 element at address: 0x10d27203c0 with size: 0.000183 MiB 00:15:57.264 element at address: 0x10d2720480 with size: 0.000183 MiB 00:15:57.264 element at address: 0x10d2720540 with size: 0.000183 MiB 00:15:57.264 element at address: 0x10d2727140 with size: 0.000183 MiB 00:15:57.264 element at address: 0x10d2727340 with size: 0.000183 MiB 00:15:57.264 element at address: 0x10d2727400 with size: 0.000183 MiB 00:15:57.264 element at address: 0x10d27274c0 with size: 0.000183 MiB 00:15:57.264 element at address: 0x10d272f780 with size: 0.000183 MiB 00:15:57.264 element at address: 0x10d272f840 with size: 0.000183 MiB 00:15:57.264 element at address: 0x10d272f900 with size: 0.000183 MiB 00:15:57.264 element at address: 0x10d276fbc0 with size: 0.000183 MiB 00:15:57.264 element at address: 0x10f98b6000 with size: 0.000183 MiB 00:15:57.264 element at address: 0x10f98b60c0 with size: 0.000183 MiB 00:15:57.264 element at address: 0x10f98b6180 with size: 0.000183 MiB 00:15:57.264 element at address: 0x10f98b6240 with size: 0.000183 MiB 00:15:57.264 element at address: 0x10f98b6300 with size: 0.000183 MiB 00:15:57.264 element at address: 0x10f98b63c0 with size: 0.000183 MiB 00:15:57.264 element at address: 0x10f98b65c0 with size: 0.000183 MiB 00:15:57.264 element at address: 0x10f98b6680 with size: 0.000183 MiB 00:15:57.264 element at address: 0x10f98b6880 with size: 0.000183 MiB 00:15:57.264 element at address: 0x10f98b6940 with size: 0.000183 MiB 00:15:57.264 element at address: 0x10f98d6c00 with size: 0.000183 MiB 00:15:57.264 element at address: 0x10f98d6cc0 with size: 0.000183 MiB 00:15:57.264 element at address: 0x10f99d6f80 with size: 0.000183 MiB 00:15:57.264 element at address: 0x10f9ad7240 with size: 0.000183 MiB 00:15:57.264 element at address: 0x10f9ad7300 with size: 0.000183 MiB 00:15:57.264 element at address: 0x10fccd7640 with size: 0.000183 MiB 00:15:57.264 element at address: 0x10fccd7840 with size: 0.000183 MiB 00:15:57.264 element at address: 0x10fccd7900 with size: 0.000183 MiB 00:15:57.264 element at address: 0x10ffed7c40 with size: 0.000183 MiB 00:15:57.264 element at address: 0x10fffd9e40 with size: 0.000183 MiB 00:15:57.264 list of memzone associated elements. 
size: 596.710144 MiB 00:15:57.264 element at address: 0x10b2cfcac0 with size: 211.013000 MiB 00:15:57.264 associated memzone info: size: 211.012878 MiB name: MP_PDU_immediate_data_Pool_0 00:15:57.264 element at address: 0x10a9489980 with size: 152.449524 MiB 00:15:57.264 associated memzone info: size: 152.449402 MiB name: MP_PDU_data_out_Pool_0 00:15:57.264 element at address: 0x10d277fd00 with size: 84.000122 MiB 00:15:57.264 associated memzone info: size: 84.000000 MiB name: MP_bdev_io_46400_0 00:15:57.264 element at address: 0x10fccd79c0 with size: 48.000122 MiB 00:15:57.264 associated memzone info: size: 48.000000 MiB name: MP_evtpool_46400_0 00:15:57.264 element at address: 0x10f9ad73c0 with size: 48.000122 MiB 00:15:57.264 associated memzone info: size: 48.000000 MiB name: MP_msgpool_46400_0 00:15:57.264 element at address: 0x10d0f3d780 with size: 20.250671 MiB 00:15:57.264 associated memzone info: size: 20.250549 MiB name: MP_PDU_Pool_0 00:15:57.264 element at address: 0x10f46b1ac0 with size: 18.000671 MiB 00:15:57.264 associated memzone info: size: 18.000549 MiB name: MP_SCSI_TASK_Pool_0 00:15:57.264 element at address: 0x10ffcd7a40 with size: 2.000488 MiB 00:15:57.264 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_46400 00:15:57.264 element at address: 0x10fcad7440 with size: 2.000488 MiB 00:15:57.264 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_46400 00:15:57.264 element at address: 0x10ffed7d00 with size: 1.008118 MiB 00:15:57.264 associated memzone info: size: 1.007996 MiB name: MP_evtpool_46400 00:15:57.264 element at address: 0x10d23fdc40 with size: 1.008118 MiB 00:15:57.264 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:15:57.264 element at address: 0x10d0e3b640 with size: 1.008118 MiB 00:15:57.264 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:15:57.264 element at address: 0x10d0d39500 with size: 1.008118 MiB 00:15:57.264 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:15:57.264 element at address: 0x10d0c373c0 with size: 1.008118 MiB 00:15:57.264 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:15:57.264 element at address: 0x10f99d7040 with size: 1.000488 MiB 00:15:57.264 associated memzone info: size: 1.000366 MiB name: RG_ring_0_46400 00:15:57.264 element at address: 0x10f98d6d80 with size: 1.000488 MiB 00:15:57.265 associated memzone info: size: 1.000366 MiB name: RG_ring_1_46400 00:15:57.265 element at address: 0x10d24ffd80 with size: 1.000488 MiB 00:15:57.265 associated memzone info: size: 1.000366 MiB name: RG_ring_4_46400 00:15:57.265 element at address: 0x10d0ab6fc0 with size: 1.000488 MiB 00:15:57.265 associated memzone info: size: 1.000366 MiB name: RG_ring_5_46400 00:15:57.265 element at address: 0x10d7b7fd80 with size: 0.500488 MiB 00:15:57.265 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_46400 00:15:57.265 element at address: 0x10d237da40 with size: 0.500488 MiB 00:15:57.265 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:15:57.265 element at address: 0x10d0bb71c0 with size: 0.500488 MiB 00:15:57.265 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:15:57.265 element at address: 0x10d272f9c0 with size: 0.250488 MiB 00:15:57.265 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:15:57.265 element at address: 0x10f98b6a00 with size: 0.125488 MiB 00:15:57.265 associated memzone info: size: 0.125366 MiB name: RG_ring_2_46400 00:15:57.265 
element at address: 0x10d2727580 with size: 0.031738 MiB 00:15:57.265 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:15:57.265 element at address: 0x10d2720600 with size: 0.023743 MiB 00:15:57.265 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:15:57.265 element at address: 0x10f58b1d80 with size: 0.016113 MiB 00:15:57.265 associated memzone info: size: 0.015991 MiB name: RG_ring_3_46400 00:15:57.265 element at address: 0x10d2726740 with size: 0.002441 MiB 00:15:57.265 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:15:57.265 element at address: 0x10fccd7700 with size: 0.000305 MiB 00:15:57.265 associated memzone info: size: 0.000183 MiB name: MP_msgpool_46400 00:15:57.265 element at address: 0x10f98b6740 with size: 0.000305 MiB 00:15:57.265 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_46400 00:15:57.265 element at address: 0x10d2727200 with size: 0.000305 MiB 00:15:57.265 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:15:57.265 09:42:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:15:57.265 09:42:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 46400 00:15:57.265 09:42:25 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 46400 ']' 00:15:57.265 09:42:25 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 46400 00:15:57.265 09:42:25 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:15:57.265 09:42:25 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:15:57.265 09:42:25 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps -c -o command 46400 00:15:57.265 09:42:25 dpdk_mem_utility -- common/autotest_common.sh@956 -- # tail -1 00:15:57.265 09:42:25 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:15:57.265 killing process with pid 46400 00:15:57.265 09:42:25 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:15:57.265 09:42:25 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 46400' 00:15:57.265 09:42:25 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 46400 00:15:57.265 09:42:25 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 46400 00:15:57.523 00:15:57.523 real 0m1.595s 00:15:57.523 user 0m1.187s 00:15:57.523 sys 0m0.996s 00:15:57.523 ************************************ 00:15:57.523 END TEST dpdk_mem_utility 00:15:57.523 ************************************ 00:15:57.523 09:42:25 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:57.523 09:42:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:15:57.523 09:42:25 -- common/autotest_common.sh@1142 -- # return 0 00:15:57.523 09:42:25 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:15:57.523 09:42:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:57.523 09:42:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:57.523 09:42:25 -- common/autotest_common.sh@10 -- # set +x 00:15:57.523 ************************************ 00:15:57.523 START TEST event 00:15:57.523 ************************************ 00:15:57.523 09:42:25 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:15:57.783 * Looking for test storage... 
00:15:57.783 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:15:57.783 09:42:25 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:57.783 09:42:25 event -- bdev/nbd_common.sh@6 -- # set -e 00:15:57.783 09:42:25 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:15:57.783 09:42:25 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:15:57.783 09:42:25 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:57.783 09:42:25 event -- common/autotest_common.sh@10 -- # set +x 00:15:57.783 ************************************ 00:15:57.783 START TEST event_perf 00:15:57.783 ************************************ 00:15:57.783 09:42:25 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:15:57.783 Running I/O for 1 seconds...[2024-07-15 09:42:25.802287] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:15:57.783 [2024-07-15 09:42:25.802608] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:58.720 EAL: TSC is not safe to use in SMP mode 00:15:58.720 EAL: TSC is not invariant 00:15:58.720 [2024-07-15 09:42:26.501545] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:58.720 [2024-07-15 09:42:26.614993] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:15:58.720 [2024-07-15 09:42:26.615057] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:15:58.720 [2024-07-15 09:42:26.615065] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:15:58.720 [2024-07-15 09:42:26.615072] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 00:15:58.720 [2024-07-15 09:42:26.619391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:58.720 [2024-07-15 09:42:26.619679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.720 Running I/O for 1 seconds...[2024-07-15 09:42:26.619562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:58.720 [2024-07-15 09:42:26.619674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:59.655 00:15:59.655 lcore 0: 2448379 00:15:59.655 lcore 1: 2448379 00:15:59.655 lcore 2: 2448374 00:15:59.655 lcore 3: 2448378 00:15:59.915 done. 
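Annotation, not part of the captured trace: the four lcore counters above are per-core event counts from a 1-second event_perf run across the 0xF core mask. A minimal manual re-run, assuming the same vagrant checkout the harness used:

    # same binary, core mask and duration as the run_test invocation above
    cd /home/vagrant/spdk_repo/spdk
    ./test/event/event_perf/event_perf -m 0xF -t 1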
00:15:59.915 00:15:59.915 real 0m1.983s 00:15:59.915 user 0m4.194s 00:15:59.915 sys 0m0.783s 00:15:59.915 09:42:27 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:59.915 09:42:27 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:15:59.915 ************************************ 00:15:59.915 END TEST event_perf 00:15:59.915 ************************************ 00:15:59.915 09:42:27 event -- common/autotest_common.sh@1142 -- # return 0 00:15:59.915 09:42:27 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:15:59.915 09:42:27 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:15:59.915 09:42:27 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:59.915 09:42:27 event -- common/autotest_common.sh@10 -- # set +x 00:15:59.915 ************************************ 00:15:59.916 START TEST event_reactor 00:15:59.916 ************************************ 00:15:59.916 09:42:27 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:15:59.916 [2024-07-15 09:42:27.840987] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:15:59.916 [2024-07-15 09:42:27.841311] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:00.483 EAL: TSC is not safe to use in SMP mode 00:16:00.483 EAL: TSC is not invariant 00:16:00.483 [2024-07-15 09:42:28.542993] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.740 [2024-07-15 09:42:28.655894] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:00.740 [2024-07-15 09:42:28.658330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.116 test_start 00:16:02.116 oneshot 00:16:02.116 tick 100 00:16:02.116 tick 100 00:16:02.116 tick 250 00:16:02.116 tick 100 00:16:02.116 tick 100 00:16:02.116 tick 100 00:16:02.116 tick 250 00:16:02.116 tick 500 00:16:02.116 tick 100 00:16:02.116 tick 100 00:16:02.116 tick 250 00:16:02.116 tick 100 00:16:02.116 tick 100 00:16:02.116 test_end 00:16:02.116 00:16:02.116 real 0m1.986s 00:16:02.116 user 0m1.255s 00:16:02.116 sys 0m0.729s 00:16:02.116 09:42:29 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:02.116 ************************************ 00:16:02.116 END TEST event_reactor 00:16:02.116 ************************************ 00:16:02.116 09:42:29 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:16:02.116 09:42:29 event -- common/autotest_common.sh@1142 -- # return 0 00:16:02.116 09:42:29 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:16:02.116 09:42:29 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:16:02.116 09:42:29 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:02.116 09:42:29 event -- common/autotest_common.sh@10 -- # set +x 00:16:02.116 ************************************ 00:16:02.116 START TEST event_reactor_perf 00:16:02.116 ************************************ 00:16:02.116 09:42:29 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:16:02.116 [2024-07-15 09:42:29.884344] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
00:16:02.116 [2024-07-15 09:42:29.884671] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:02.685 EAL: TSC is not safe to use in SMP mode 00:16:02.685 EAL: TSC is not invariant 00:16:02.685 [2024-07-15 09:42:30.582111] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.685 [2024-07-15 09:42:30.695526] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:02.685 [2024-07-15 09:42:30.698010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.068 test_start 00:16:04.068 test_end 00:16:04.068 Performance: 4558167 events per second 00:16:04.068 00:16:04.068 real 0m1.978s 00:16:04.068 user 0m1.236s 00:16:04.068 sys 0m0.742s 00:16:04.068 09:42:31 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:04.068 09:42:31 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:16:04.068 ************************************ 00:16:04.068 END TEST event_reactor_perf 00:16:04.068 ************************************ 00:16:04.068 09:42:31 event -- common/autotest_common.sh@1142 -- # return 0 00:16:04.068 09:42:31 event -- event/event.sh@49 -- # uname -s 00:16:04.068 09:42:31 event -- event/event.sh@49 -- # '[' FreeBSD = Linux ']' 00:16:04.068 00:16:04.068 real 0m6.310s 00:16:04.068 user 0m6.834s 00:16:04.068 sys 0m2.499s 00:16:04.068 09:42:31 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:04.068 09:42:31 event -- common/autotest_common.sh@10 -- # set +x 00:16:04.068 ************************************ 00:16:04.068 END TEST event 00:16:04.068 ************************************ 00:16:04.068 09:42:31 -- common/autotest_common.sh@1142 -- # return 0 00:16:04.068 09:42:31 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:16:04.068 09:42:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:04.068 09:42:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:04.068 09:42:31 -- common/autotest_common.sh@10 -- # set +x 00:16:04.068 ************************************ 00:16:04.068 START TEST thread 00:16:04.068 ************************************ 00:16:04.068 09:42:31 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:16:04.068 * Looking for test storage... 00:16:04.068 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:16:04.068 09:42:32 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:16:04.068 09:42:32 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:16:04.068 09:42:32 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:04.068 09:42:32 thread -- common/autotest_common.sh@10 -- # set +x 00:16:04.068 ************************************ 00:16:04.068 START TEST thread_poller_perf 00:16:04.068 ************************************ 00:16:04.068 09:42:32 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:16:04.328 [2024-07-15 09:42:32.167404] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
00:16:04.328 [2024-07-15 09:42:32.167723] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:04.897 EAL: TSC is not safe to use in SMP mode 00:16:04.897 EAL: TSC is not invariant 00:16:04.897 [2024-07-15 09:42:32.872967] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.897 [2024-07-15 09:42:32.985408] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:04.897 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:16:04.897 [2024-07-15 09:42:32.988283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.281 ====================================== 00:16:06.281 busy:2495993464 (cyc) 00:16:06.281 total_run_count: 7011000 00:16:06.281 tsc_hz: 2494140116 (cyc) 00:16:06.281 ====================================== 00:16:06.281 poller_cost: 356 (cyc), 142 (nsec) 00:16:06.281 00:16:06.281 real 0m1.988s 00:16:06.281 user 0m1.225s 00:16:06.281 sys 0m0.761s 00:16:06.281 09:42:34 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:06.281 09:42:34 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:16:06.281 ************************************ 00:16:06.281 END TEST thread_poller_perf 00:16:06.281 ************************************ 00:16:06.281 09:42:34 thread -- common/autotest_common.sh@1142 -- # return 0 00:16:06.281 09:42:34 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:16:06.281 09:42:34 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:16:06.281 09:42:34 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:06.281 09:42:34 thread -- common/autotest_common.sh@10 -- # set +x 00:16:06.281 ************************************ 00:16:06.281 START TEST thread_poller_perf 00:16:06.281 ************************************ 00:16:06.281 09:42:34 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:16:06.281 [2024-07-15 09:42:34.207749] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:16:06.281 [2024-07-15 09:42:34.208072] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:06.850 EAL: TSC is not safe to use in SMP mode 00:16:06.850 EAL: TSC is not invariant 00:16:06.850 [2024-07-15 09:42:34.897896] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.109 [2024-07-15 09:42:35.010130] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:07.109 Running 1000 pollers for 1 seconds with 0 microseconds period. 
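Annotation: in the poller_perf invocations above, -b is the number of pollers to register, -l the poller period in microseconds (0 meaning an active poller checked on every reactor pass), and -t the run time in seconds, which is how "-b 1000 -l 1 -t 1" maps to the "Running 1000 pollers for 1 seconds with 1 microseconds period" banner. Re-running the timed case by hand, assuming the same checkout:

    cd /home/vagrant/spdk_repo/spdk
    ./test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1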
00:16:07.109 [2024-07-15 09:42:35.012503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.529 ====================================== 00:16:08.529 busy:2495238912 (cyc) 00:16:08.529 total_run_count: 98567000 00:16:08.529 tsc_hz: 2494140116 (cyc) 00:16:08.529 ====================================== 00:16:08.529 poller_cost: 25 (cyc), 10 (nsec) 00:16:08.529 00:16:08.529 real 0m1.970s 00:16:08.529 user 0m1.243s 00:16:08.529 sys 0m0.724s 00:16:08.529 09:42:36 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:08.529 09:42:36 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:16:08.529 ************************************ 00:16:08.529 END TEST thread_poller_perf 00:16:08.529 ************************************ 00:16:08.529 09:42:36 thread -- common/autotest_common.sh@1142 -- # return 0 00:16:08.529 09:42:36 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:16:08.529 09:42:36 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:16:08.529 09:42:36 thread -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:08.529 09:42:36 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:08.529 09:42:36 thread -- common/autotest_common.sh@10 -- # set +x 00:16:08.529 ************************************ 00:16:08.529 START TEST thread_spdk_lock 00:16:08.529 ************************************ 00:16:08.529 09:42:36 thread.thread_spdk_lock -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:16:08.529 [2024-07-15 09:42:36.230830] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:16:08.529 [2024-07-15 09:42:36.231139] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:09.121 EAL: TSC is not safe to use in SMP mode 00:16:09.121 EAL: TSC is not invariant 00:16:09.121 [2024-07-15 09:42:36.933814] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:09.121 [2024-07-15 09:42:37.044860] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:09.121 [2024-07-15 09:42:37.044910] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
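Annotation: poller_cost in the two summaries above is simply busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz; the arithmetic checks out for both runs, and the gap (356 vs 25 cycles per poll) is plausibly the timer-list bookkeeping a 1-microsecond timed poller pays on each pass, whereas a 0-period poller is just a function call from the reactor loop:

    # timed run:  2495993464 cyc / 7011000 runs  -> ~356 cyc -> ~142.7 ns
    # active run: 2495238912 cyc / 98567000 runs -> ~25 cyc  -> ~10.0 ns
    echo 'scale=1; 2495993464/7011000'   | bc   # 356.0
    echo 'scale=1; 356*10^9/2494140116'  | bc   # 142.7
    echo 'scale=1; 2495238912/98567000'  | bc   # 25.3
    echo 'scale=1; 25*10^9/2494140116'   | bc   # 10.0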
00:16:09.121 [2024-07-15 09:42:37.048024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.121 [2024-07-15 09:42:37.048020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:09.392 [2024-07-15 09:42:37.480348] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 965:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:16:09.392 [2024-07-15 09:42:37.480410] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:16:09.392 [2024-07-15 09:42:37.480418] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x3159e0 00:16:09.392 [2024-07-15 09:42:37.480865] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:16:09.392 [2024-07-15 09:42:37.480965] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1026:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:16:09.392 [2024-07-15 09:42:37.480972] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:16:09.659 Starting test contend 00:16:09.659 Worker Delay Wait us Hold us Total us 00:16:09.659 0 3 259120 160314 419434 00:16:09.659 1 5 161215 261541 422756 00:16:09.659 PASS test contend 00:16:09.659 Starting test hold_by_poller 00:16:09.659 PASS test hold_by_poller 00:16:09.659 Starting test hold_by_message 00:16:09.659 PASS test hold_by_message 00:16:09.659 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:16:09.659 100014 assertions passed 00:16:09.659 0 assertions failed 00:16:09.659 00:16:09.659 real 0m1.419s 00:16:09.659 user 0m1.098s 00:16:09.659 sys 0m0.754s 00:16:09.659 09:42:37 thread.thread_spdk_lock -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:09.660 09:42:37 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:16:09.660 ************************************ 00:16:09.660 END TEST thread_spdk_lock 00:16:09.660 ************************************ 00:16:09.660 09:42:37 thread -- common/autotest_common.sh@1142 -- # return 0 00:16:09.660 00:16:09.660 real 0m5.728s 00:16:09.660 user 0m3.773s 00:16:09.660 sys 0m2.420s 00:16:09.660 09:42:37 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:09.660 09:42:37 thread -- common/autotest_common.sh@10 -- # set +x 00:16:09.660 ************************************ 00:16:09.660 END TEST thread 00:16:09.660 ************************************ 00:16:09.660 09:42:37 -- common/autotest_common.sh@1142 -- # return 0 00:16:09.660 09:42:37 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:16:09.660 09:42:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:09.660 09:42:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:09.660 09:42:37 -- common/autotest_common.sh@10 -- # set +x 00:16:09.660 ************************************ 00:16:09.660 START TEST accel 00:16:09.660 ************************************ 00:16:09.660 09:42:37 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:16:09.926 * Looking for test storage... 
00:16:09.926 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:16:09.926 09:42:37 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:16:09.926 09:42:37 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:16:09.926 09:42:37 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:16:09.926 09:42:37 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=46708 00:16:09.926 09:42:37 accel -- accel/accel.sh@63 -- # waitforlisten 46708 00:16:09.926 09:42:37 accel -- common/autotest_common.sh@829 -- # '[' -z 46708 ']' 00:16:09.926 09:42:37 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.926 09:42:37 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:09.926 09:42:37 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /tmp//sh-np.rnMkDL 00:16:09.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.926 09:42:37 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.926 09:42:37 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:09.926 09:42:37 accel -- common/autotest_common.sh@10 -- # set +x 00:16:09.926 [2024-07-15 09:42:37.888879] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:16:09.926 [2024-07-15 09:42:37.889043] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:10.497 EAL: TSC is not safe to use in SMP mode 00:16:10.497 EAL: TSC is not invariant 00:16:10.497 [2024-07-15 09:42:38.585042] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.755 [2024-07-15 09:42:38.698879] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:10.755 09:42:38 accel -- accel/accel.sh@61 -- # build_accel_config 00:16:10.755 09:42:38 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:10.755 09:42:38 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:10.755 09:42:38 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:10.755 09:42:38 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:10.755 09:42:38 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:10.755 09:42:38 accel -- accel/accel.sh@40 -- # local IFS=, 00:16:10.755 09:42:38 accel -- accel/accel.sh@41 -- # jq -r . 00:16:10.755 [2024-07-15 09:42:38.713898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.755 09:42:38 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:10.755 09:42:38 accel -- common/autotest_common.sh@862 -- # return 0 00:16:10.755 09:42:38 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:16:10.755 09:42:38 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:16:10.755 09:42:38 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:16:10.755 09:42:38 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:16:10.755 09:42:38 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:16:10.755 09:42:38 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:16:10.755 09:42:38 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:16:10.755 09:42:38 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.755 09:42:38 accel -- common/autotest_common.sh@10 -- # set +x 00:16:10.755 09:42:38 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.755 09:42:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:10.755 09:42:38 accel -- accel/accel.sh@72 -- # IFS== 00:16:10.755 09:42:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:10.755 09:42:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:10.755 09:42:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:10.755 09:42:38 accel -- accel/accel.sh@72 -- # IFS== 00:16:10.755 09:42:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:10.755 09:42:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:10.755 09:42:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:10.755 09:42:38 accel -- accel/accel.sh@72 -- # IFS== 00:16:10.755 09:42:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:10.755 09:42:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:10.755 09:42:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:10.755 09:42:38 accel -- accel/accel.sh@72 -- # IFS== 00:16:10.755 09:42:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:10.755 09:42:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:10.755 09:42:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:10.755 09:42:38 accel -- accel/accel.sh@72 -- # IFS== 00:16:10.755 09:42:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:10.755 09:42:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:10.755 09:42:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:10.755 09:42:38 accel -- accel/accel.sh@72 -- # IFS== 00:16:10.755 09:42:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:10.755 09:42:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:10.755 09:42:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:10.755 09:42:38 accel -- accel/accel.sh@72 -- # IFS== 00:16:10.755 09:42:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:10.755 09:42:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:10.756 09:42:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:10.756 09:42:38 accel -- accel/accel.sh@72 -- # IFS== 00:16:10.756 09:42:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:10.756 09:42:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:10.756 09:42:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:10.756 09:42:38 accel -- accel/accel.sh@72 -- # IFS== 00:16:10.756 09:42:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:10.756 09:42:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:10.756 09:42:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:10.756 09:42:38 accel -- accel/accel.sh@72 -- # IFS== 00:16:10.756 09:42:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:10.756 09:42:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:10.756 09:42:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:10.756 09:42:38 accel -- accel/accel.sh@72 -- # IFS== 00:16:10.756 09:42:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:10.756 
09:42:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:10.756 09:42:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:10.756 09:42:38 accel -- accel/accel.sh@72 -- # IFS== 00:16:10.756 09:42:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:10.756 09:42:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:10.756 09:42:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:10.756 09:42:38 accel -- accel/accel.sh@72 -- # IFS== 00:16:10.756 09:42:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:10.756 09:42:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:10.756 09:42:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:10.756 09:42:38 accel -- accel/accel.sh@72 -- # IFS== 00:16:10.756 09:42:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:10.756 09:42:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:10.756 09:42:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:10.756 09:42:38 accel -- accel/accel.sh@72 -- # IFS== 00:16:10.756 09:42:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:10.756 09:42:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:10.756 09:42:38 accel -- accel/accel.sh@75 -- # killprocess 46708 00:16:10.756 09:42:38 accel -- common/autotest_common.sh@948 -- # '[' -z 46708 ']' 00:16:10.756 09:42:38 accel -- common/autotest_common.sh@952 -- # kill -0 46708 00:16:10.756 09:42:38 accel -- common/autotest_common.sh@953 -- # uname 00:16:10.756 09:42:38 accel -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:16:10.756 09:42:38 accel -- common/autotest_common.sh@956 -- # ps -c -o command 46708 00:16:10.756 09:42:38 accel -- common/autotest_common.sh@956 -- # tail -1 00:16:10.756 09:42:38 accel -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:16:10.756 09:42:38 accel -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:16:10.756 killing process with pid 46708 00:16:10.756 09:42:38 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 46708' 00:16:10.756 09:42:38 accel -- common/autotest_common.sh@967 -- # kill 46708 00:16:10.756 09:42:38 accel -- common/autotest_common.sh@972 -- # wait 46708 00:16:11.324 09:42:39 accel -- accel/accel.sh@76 -- # trap - ERR 00:16:11.324 09:42:39 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:16:11.324 09:42:39 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:11.324 09:42:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:11.324 09:42:39 accel -- common/autotest_common.sh@10 -- # set +x 00:16:11.324 09:42:39 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:16:11.324 09:42:39 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.AO55DM -h 00:16:11.324 09:42:39 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:11.324 09:42:39 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:16:11.324 09:42:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:11.324 09:42:39 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:16:11.324 09:42:39 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:16:11.324 09:42:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:11.324 09:42:39 accel -- common/autotest_common.sh@10 -- # 
set +x 00:16:11.324 ************************************ 00:16:11.324 START TEST accel_missing_filename 00:16:11.324 ************************************ 00:16:11.324 09:42:39 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:16:11.324 09:42:39 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:16:11.324 09:42:39 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:16:11.324 09:42:39 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:16:11.324 09:42:39 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:11.324 09:42:39 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:16:11.324 09:42:39 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:11.324 09:42:39 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:16:11.324 09:42:39 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.EOqpQJ -t 1 -w compress 00:16:11.324 [2024-07-15 09:42:39.305947] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:16:11.324 [2024-07-15 09:42:39.306266] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:12.261 EAL: TSC is not safe to use in SMP mode 00:16:12.261 EAL: TSC is not invariant 00:16:12.261 [2024-07-15 09:42:40.008429] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.261 [2024-07-15 09:42:40.123011] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:12.261 09:42:40 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:16:12.261 09:42:40 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:12.261 09:42:40 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:12.261 09:42:40 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:12.261 09:42:40 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:12.261 09:42:40 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:12.261 09:42:40 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:16:12.261 09:42:40 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:16:12.261 [2024-07-15 09:42:40.136710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.261 [2024-07-15 09:42:40.139546] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:12.261 [2024-07-15 09:42:40.192975] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:16:12.261 A filename is required. 
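Annotation: the failure above is the expected outcome, since a compress workload needs an input file supplied via -l; the surrounding NOT wrapper passes precisely because accel_perf exits nonzero. A manual reproduction, assuming the same build tree as the trace:

    cd /home/vagrant/spdk_repo/spdk
    ./build/examples/accel_perf -t 1 -w compress && echo 'unexpected success'
    # expected: "A filename is required." and a nonzero exit status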
00:16:12.261 09:42:40 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:16:12.261 09:42:40 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:12.261 09:42:40 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:16:12.261 09:42:40 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:16:12.261 09:42:40 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:16:12.261 09:42:40 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:12.261 00:16:12.261 real 0m1.058s 00:16:12.261 user 0m0.301s 00:16:12.261 sys 0m0.757s 00:16:12.261 09:42:40 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:12.261 09:42:40 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:16:12.261 ************************************ 00:16:12.261 END TEST accel_missing_filename 00:16:12.261 ************************************ 00:16:12.520 09:42:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:12.520 09:42:40 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:12.520 09:42:40 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:16:12.520 09:42:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:12.520 09:42:40 accel -- common/autotest_common.sh@10 -- # set +x 00:16:12.520 ************************************ 00:16:12.520 START TEST accel_compress_verify 00:16:12.520 ************************************ 00:16:12.520 09:42:40 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:12.520 09:42:40 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:16:12.520 09:42:40 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:12.520 09:42:40 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:16:12.520 09:42:40 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:12.520 09:42:40 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:16:12.520 09:42:40 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:12.520 09:42:40 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:12.520 09:42:40 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.NVjlyg -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:12.520 [2024-07-15 09:42:40.420604] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
00:16:12.520 [2024-07-15 09:42:40.420916] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:13.090 EAL: TSC is not safe to use in SMP mode 00:16:13.090 EAL: TSC is not invariant 00:16:13.090 [2024-07-15 09:42:41.108023] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.349 [2024-07-15 09:42:41.220520] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:13.349 09:42:41 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:16:13.349 09:42:41 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:13.349 09:42:41 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:13.349 09:42:41 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:13.349 09:42:41 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:13.349 09:42:41 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:13.349 09:42:41 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:16:13.349 09:42:41 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:16:13.349 [2024-07-15 09:42:41.234258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.349 [2024-07-15 09:42:41.236958] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:13.349 [2024-07-15 09:42:41.290066] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:16:13.609 00:16:13.609 Compression does not support the verify option, aborting. 00:16:13.609 09:42:41 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=211 00:16:13.609 09:42:41 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:13.609 09:42:41 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=83 00:16:13.609 09:42:41 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:16:13.609 09:42:41 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:16:13.609 09:42:41 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:13.609 00:16:13.609 real 0m1.039s 00:16:13.609 user 0m0.292s 00:16:13.609 sys 0m0.748s 00:16:13.609 09:42:41 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:13.609 09:42:41 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:16:13.609 ************************************ 00:16:13.609 END TEST accel_compress_verify 00:16:13.609 ************************************ 00:16:13.609 09:42:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:13.609 09:42:41 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:16:13.609 09:42:41 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:16:13.609 09:42:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:13.609 09:42:41 accel -- common/autotest_common.sh@10 -- # set +x 00:16:13.609 ************************************ 00:16:13.609 START TEST accel_wrong_workload 00:16:13.609 ************************************ 00:16:13.609 09:42:41 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:16:13.609 09:42:41 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:16:13.609 09:42:41 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # 
valid_exec_arg accel_perf -t 1 -w foobar 00:16:13.609 09:42:41 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:16:13.609 09:42:41 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:13.609 09:42:41 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:16:13.609 09:42:41 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:13.609 09:42:41 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:16:13.609 09:42:41 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.6LFOSU -t 1 -w foobar 00:16:13.609 Unsupported workload type: foobar 00:16:13.609 [2024-07-15 09:42:41.514145] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:16:13.609 accel_perf options: 00:16:13.609 [-h help message] 00:16:13.609 [-q queue depth per core] 00:16:13.609 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:16:13.609 [-T number of threads per core 00:16:13.609 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:16:13.609 [-t time in seconds] 00:16:13.609 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:16:13.609 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:16:13.609 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:16:13.609 [-l for compress/decompress workloads, name of uncompressed input file 00:16:13.609 [-S for crc32c workload, use this seed value (default 0) 00:16:13.609 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:16:13.609 [-f for fill workload, use this BYTE value (default 255) 00:16:13.609 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:16:13.609 [-y verify result if this switch is on] 00:16:13.609 [-a tasks to allocate per core (default: same value as -q)] 00:16:13.609 Can be used to spread operations across a wider range of memory. 
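Annotation: the usage dump above is accel_perf rejecting "-w foobar"; any workload outside the listed set makes spdk_app_parse_args fail before the app starts, which is exactly what this negative test relies on. By hand, under the same assumptions as the sketches above:

    cd /home/vagrant/spdk_repo/spdk
    ./build/examples/accel_perf -t 1 -w foobar; echo "exit=$?"
    # prints "Unsupported workload type: foobar" plus the option summary,
    # then exits nonzero, so the NOT wrapper records a pass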
00:16:13.609 09:42:41 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:16:13.609 09:42:41 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:13.609 09:42:41 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:13.609 09:42:41 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:13.609 00:16:13.609 real 0m0.015s 00:16:13.609 user 0m0.009s 00:16:13.609 sys 0m0.008s 00:16:13.609 09:42:41 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:13.609 09:42:41 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:16:13.609 ************************************ 00:16:13.609 END TEST accel_wrong_workload 00:16:13.609 ************************************ 00:16:13.609 09:42:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:13.609 09:42:41 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:16:13.609 09:42:41 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:16:13.609 09:42:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:13.609 09:42:41 accel -- common/autotest_common.sh@10 -- # set +x 00:16:13.609 ************************************ 00:16:13.609 START TEST accel_negative_buffers 00:16:13.609 ************************************ 00:16:13.609 09:42:41 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:16:13.609 09:42:41 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:16:13.609 09:42:41 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:16:13.609 09:42:41 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:16:13.609 09:42:41 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:13.609 09:42:41 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:16:13.610 09:42:41 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:13.610 09:42:41 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:16:13.610 09:42:41 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.Ct7QvU -t 1 -w xor -y -x -1 00:16:13.610 -x option must be non-negative. 00:16:13.610 [2024-07-15 09:42:41.586593] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:16:13.610 accel_perf options: 00:16:13.610 [-h help message] 00:16:13.610 [-q queue depth per core] 00:16:13.610 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:16:13.610 [-T number of threads per core 00:16:13.610 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:16:13.610 [-t time in seconds] 00:16:13.610 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:16:13.610 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:16:13.610 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:16:13.610 [-l for compress/decompress workloads, name of uncompressed input file 00:16:13.610 [-S for crc32c workload, use this seed value (default 0) 00:16:13.610 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:16:13.610 [-f for fill workload, use this BYTE value (default 255) 00:16:13.610 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:16:13.610 [-y verify result if this switch is on] 00:16:13.610 [-a tasks to allocate per core (default: same value as -q)] 00:16:13.610 Can be used to spread operations across a wider range of memory. 00:16:13.610 09:42:41 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:16:13.610 09:42:41 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:13.610 09:42:41 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:13.610 09:42:41 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:13.610 00:16:13.610 real 0m0.016s 00:16:13.610 user 0m0.009s 00:16:13.610 sys 0m0.007s 00:16:13.610 09:42:41 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:13.610 09:42:41 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:16:13.610 ************************************ 00:16:13.610 END TEST accel_negative_buffers 00:16:13.610 ************************************ 00:16:13.610 09:42:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:13.610 09:42:41 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:16:13.610 09:42:41 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:16:13.610 09:42:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:13.610 09:42:41 accel -- common/autotest_common.sh@10 -- # set +x 00:16:13.610 ************************************ 00:16:13.610 START TEST accel_crc32c 00:16:13.610 ************************************ 00:16:13.610 09:42:41 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:16:13.610 09:42:41 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:16:13.610 09:42:41 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:16:13.610 09:42:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:13.610 09:42:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:13.610 09:42:41 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:16:13.610 09:42:41 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.xDbU1x -t 1 -w crc32c -S 32 -y 00:16:13.610 [2024-07-15 09:42:41.653869] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
00:16:13.610 [2024-07-15 09:42:41.654189] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:14.547 EAL: TSC is not safe to use in SMP mode 00:16:14.547 EAL: TSC is not invariant 00:16:14.547 [2024-07-15 09:42:42.351339] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.547 [2024-07-15 09:42:42.455402] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:14.547 09:42:42 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:16:14.547 09:42:42 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:14.547 09:42:42 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:14.547 09:42:42 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:14.547 09:42:42 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:14.547 09:42:42 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:14.547 09:42:42 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:16:14.547 09:42:42 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:16:14.548 [2024-07-15 09:42:42.470779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 
00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:14.548 09:42:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:15.927 09:42:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:15.927 09:42:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:15.927 09:42:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:15.927 09:42:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:15.927 
09:42:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:15.927 09:42:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:15.928 09:42:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:15.928 09:42:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:15.928 09:42:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:15.928 09:42:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:15.928 09:42:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:15.928 09:42:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:15.928 09:42:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:15.928 09:42:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:15.928 09:42:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:15.928 09:42:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:15.928 09:42:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:15.928 09:42:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:15.928 09:42:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:15.928 09:42:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:15.928 09:42:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:15.928 09:42:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:15.928 09:42:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:15.928 09:42:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:15.928 09:42:43 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:15.928 09:42:43 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:16:15.928 09:42:43 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:15.928 00:16:15.928 real 0m2.050s 00:16:15.928 user 0m1.301s 00:16:15.928 sys 0m0.754s 00:16:15.928 09:42:43 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:15.928 09:42:43 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:16:15.928 ************************************ 00:16:15.928 END TEST accel_crc32c 00:16:15.928 ************************************ 00:16:15.928 09:42:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:15.928 09:42:43 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:16:15.928 09:42:43 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:16:15.928 09:42:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:15.928 09:42:43 accel -- common/autotest_common.sh@10 -- # set +x 00:16:15.928 ************************************ 00:16:15.928 START TEST accel_crc32c_C2 00:16:15.928 ************************************ 00:16:15.928 09:42:43 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:16:15.928 09:42:43 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:16:15.928 09:42:43 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:16:15.928 09:42:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:15.928 09:42:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:15.928 09:42:43 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:16:15.928 09:42:43 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.pjNUGj -t 1 -w crc32c -y -C 2 00:16:15.928 [2024-07-15 09:42:43.763848] 
Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:16:15.928 [2024-07-15 09:42:43.764174] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:16.497 EAL: TSC is not safe to use in SMP mode 00:16:16.497 EAL: TSC is not invariant 00:16:16.497 [2024-07-15 09:42:44.472185] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.497 [2024-07-15 09:42:44.585956] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:16:16.757 [2024-07-15 09:42:44.597490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:16.757 09:42:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
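The repetitive IFS=: / read -r var val / case "$var" entries above are accel.sh replaying the workload parameters through a colon-separated read loop under xtrace; a minimal sketch of that shell pattern (an assumed simplification for readability, not the verbatim accel.sh source):

    # feed colon-separated key:value pairs through a read loop,
    # mirroring the traced pattern (the keys here are illustrative)
    while IFS=: read -r var val; do
        case "$var" in
            opc) accel_opc=$val ;;       # e.g. crc32c, copy, fill
            module) accel_module=$val ;; # e.g. software
        esac
    done <<< $'opc:crc32c\nmodule:software'

Each iteration of that loop emits one "IFS=:", one "read -r var val", and one case-match line in the trace, which is why the same three entries recur for every parameter.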
00:16:18.137 09:42:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:18.137 09:42:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.137 09:42:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:18.137 09:42:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:18.137 09:42:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:18.137 09:42:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.137 09:42:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:18.137 09:42:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:18.137 09:42:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:18.137 09:42:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.137 09:42:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:18.137 09:42:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:18.137 09:42:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:18.137 09:42:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.137 09:42:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:18.137 09:42:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:18.137 09:42:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:18.137 09:42:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.137 09:42:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:18.137 09:42:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:18.137 09:42:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:18.137 09:42:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.137 09:42:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:18.137 09:42:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:18.137 09:42:45 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:18.137 09:42:45 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:16:18.137 09:42:45 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:18.137 00:16:18.137 real 0m2.088s 00:16:18.137 user 0m1.346s 00:16:18.137 sys 0m0.761s 00:16:18.137 09:42:45 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:18.137 09:42:45 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:16:18.137 ************************************ 00:16:18.137 END TEST accel_crc32c_C2 00:16:18.137 ************************************ 00:16:18.137 09:42:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:18.137 09:42:45 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:16:18.137 09:42:45 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:16:18.137 09:42:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:18.137 09:42:45 accel -- common/autotest_common.sh@10 -- # set +x 00:16:18.137 ************************************ 00:16:18.137 START TEST accel_copy 00:16:18.137 ************************************ 00:16:18.137 09:42:45 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:16:18.137 09:42:45 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:16:18.137 09:42:45 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:16:18.137 09:42:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:18.137 09:42:45 
accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:18.137 09:42:45 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:16:18.137 09:42:45 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.Se5rdO -t 1 -w copy -y 00:16:18.137 [2024-07-15 09:42:45.908878] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:16:18.137 [2024-07-15 09:42:45.909139] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:18.704 EAL: TSC is not safe to use in SMP mode 00:16:18.704 EAL: TSC is not invariant 00:16:18.704 [2024-07-15 09:42:46.657221] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.704 [2024-07-15 09:42:46.762770] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:18.704 09:42:46 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:16:18.704 09:42:46 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:18.704 09:42:46 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:18.704 09:42:46 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:18.704 09:42:46 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:18.704 09:42:46 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:18.704 09:42:46 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:16:18.704 09:42:46 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:16:18.705 [2024-07-15 09:42:46.777273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:18.705 09:42:46 accel.accel_copy 
-- accel/accel.sh@19 -- # read -r var val 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:18.705 09:42:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:20.082 09:42:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:20.082 09:42:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:20.082 09:42:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:20.082 09:42:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:20.083 09:42:47 accel.accel_copy -- accel/accel.sh@20 -- # 
val= 00:16:20.083 09:42:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:20.083 09:42:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:20.083 09:42:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:20.083 09:42:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:20.083 09:42:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:20.083 09:42:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:20.083 09:42:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:20.083 09:42:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:20.083 09:42:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:20.083 09:42:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:20.083 09:42:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:20.083 09:42:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:20.083 09:42:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:20.083 09:42:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:20.083 09:42:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:20.083 09:42:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:20.083 09:42:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:20.083 09:42:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:20.083 09:42:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:20.083 09:42:47 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:20.083 09:42:47 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:16:20.083 09:42:47 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:20.083 00:16:20.083 real 0m2.104s 00:16:20.083 user 0m1.317s 00:16:20.083 sys 0m0.794s 00:16:20.083 09:42:47 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:20.083 09:42:47 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:16:20.083 ************************************ 00:16:20.083 END TEST accel_copy 00:16:20.083 ************************************ 00:16:20.083 09:42:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:20.083 09:42:48 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:16:20.083 09:42:48 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:16:20.083 09:42:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:20.083 09:42:48 accel -- common/autotest_common.sh@10 -- # set +x 00:16:20.083 ************************************ 00:16:20.083 START TEST accel_fill 00:16:20.083 ************************************ 00:16:20.083 09:42:48 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:16:20.083 09:42:48 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:16:20.083 09:42:48 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:16:20.083 09:42:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:20.083 09:42:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:20.083 09:42:48 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:16:20.083 09:42:48 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.3lddhF -t 1 -w fill -f 128 -q 64 -a 64 -y 00:16:20.083 [2024-07-15 09:42:48.066891] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
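Worth noting against the usage text captured earlier: accel_fill is the first workload in this stretch of the log to set the fill byte and queue knobs explicitly (-f 128 -q 64 -a 64), and per the -a description ("default: same value as -q"), passing -a 64 alongside -q 64 is explicit rather than necessary. An equivalent standalone invocation (illustrative; the traced -c temp config is again omitted):

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y

The val=0x80 entry in the trace below is the same -f 128 fill byte echoed back in hex.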
00:16:20.083 [2024-07-15 09:42:48.067220] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:21.021 EAL: TSC is not safe to use in SMP mode 00:16:21.021 EAL: TSC is not invariant 00:16:21.021 [2024-07-15 09:42:48.759692] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.021 [2024-07-15 09:42:48.871232] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:16:21.021 [2024-07-15 09:42:48.884311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 
bytes' 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:21.021 09:42:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:22.399 09:42:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:22.399 09:42:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:22.399 09:42:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:22.399 09:42:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:22.399 09:42:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:22.399 09:42:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:22.399 09:42:50 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:22.399 09:42:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:22.399 09:42:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:22.399 09:42:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:22.399 09:42:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:22.399 09:42:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:22.399 09:42:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:22.399 09:42:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:22.399 09:42:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:22.399 09:42:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:22.399 09:42:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:22.399 09:42:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:22.399 09:42:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:22.399 09:42:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:22.399 09:42:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:22.399 09:42:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:22.399 09:42:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:22.399 09:42:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:22.399 09:42:50 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:22.399 09:42:50 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:16:22.399 09:42:50 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:22.399 00:16:22.399 real 0m2.055s 00:16:22.399 user 0m1.295s 00:16:22.399 sys 0m0.770s 00:16:22.399 09:42:50 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:22.399 09:42:50 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:16:22.399 ************************************ 00:16:22.399 END TEST accel_fill 00:16:22.399 ************************************ 00:16:22.399 09:42:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:22.399 09:42:50 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:16:22.399 09:42:50 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:16:22.399 09:42:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:22.399 09:42:50 accel -- common/autotest_common.sh@10 -- # set +x 00:16:22.399 ************************************ 00:16:22.399 START TEST accel_copy_crc32c 00:16:22.399 ************************************ 00:16:22.400 09:42:50 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:16:22.400 09:42:50 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:16:22.400 09:42:50 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:16:22.400 09:42:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:22.400 09:42:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:22.400 09:42:50 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:16:22.400 09:42:50 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.bPrSG4 -t 1 -w copy_crc32c -y 00:16:22.400 [2024-07-15 09:42:50.180795] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
00:16:22.400 [2024-07-15 09:42:50.181105] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:22.969 EAL: TSC is not safe to use in SMP mode 00:16:22.969 EAL: TSC is not invariant 00:16:22.969 [2024-07-15 09:42:50.875457] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.969 [2024-07-15 09:42:50.988415] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:22.969 09:42:50 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:16:22.969 09:42:50 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:22.969 09:42:50 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:22.969 09:42:50 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:22.969 09:42:50 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:22.969 09:42:50 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:22.969 09:42:50 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:16:22.969 09:42:50 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:16:22.969 [2024-07-15 09:42:51.002299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:16:22.969 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:22.970 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:16:22.970 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:22.970 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:22.970 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:16:22.970 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:22.970 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:22.970 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:22.970 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:16:22.970 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:22.970 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:22.970 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:22.970 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:16:22.970 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:22.970 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:22.970 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:22.970 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:16:22.970 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:22.970 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:22.970 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:22.970 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:16:22.970 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:22.970 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:22.970 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:22.970 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:22.970 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:22.970 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
IFS=: 00:16:22.970 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:22.970 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:22.970 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:22.970 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:22.970 09:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:24.345 09:42:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:24.345 09:42:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:24.345 09:42:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:24.345 09:42:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:24.345 09:42:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:24.345 09:42:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:24.345 09:42:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:24.345 09:42:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:24.345 09:42:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:24.345 09:42:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:24.345 09:42:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:24.345 09:42:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:24.345 09:42:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:24.345 09:42:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:24.345 09:42:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:24.345 09:42:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:24.345 09:42:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:24.345 09:42:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:24.345 09:42:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:24.345 09:42:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:24.345 09:42:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:24.345 09:42:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:24.345 09:42:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:24.345 09:42:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:24.345 09:42:52 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:24.345 09:42:52 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:16:24.345 09:42:52 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:24.345 00:16:24.345 real 0m2.055s 00:16:24.345 user 0m1.312s 00:16:24.345 sys 0m0.756s 00:16:24.345 09:42:52 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:24.345 ************************************ 00:16:24.345 END TEST accel_copy_crc32c 00:16:24.345 ************************************ 00:16:24.345 09:42:52 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:16:24.345 09:42:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:24.345 09:42:52 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:16:24.345 09:42:52 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:16:24.345 09:42:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:24.345 09:42:52 accel -- common/autotest_common.sh@10 -- # set +x 
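Each START TEST / END TEST banner pair, with the real/user/sys triple between them, is produced by the run_test wrapper timing a test command via the shell's time keyword; a reduced sketch of that mechanism (a hypothetical reconstruction, not the verbatim common/autotest_common.sh):

    # run_test <name> <command...>: banner, timed run, banner
    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"   # bash emits the real/user/sys lines seen in this log
        echo "************ END TEST $name ************"
    }

The xtrace_disable / set +x entries flanking each banner are consistent with the wrapper suppressing trace output while it prints the banners themselves.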
00:16:24.345 ************************************ 00:16:24.345 START TEST accel_copy_crc32c_C2 00:16:24.345 ************************************ 00:16:24.346 09:42:52 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:16:24.346 09:42:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:16:24.346 09:42:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:16:24.346 09:42:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:16:24.346 09:42:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:24.346 09:42:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:24.346 09:42:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.tI49sP -t 1 -w copy_crc32c -y -C 2 00:16:24.346 [2024-07-15 09:42:52.292531] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:16:24.346 [2024-07-15 09:42:52.292851] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:24.914 EAL: TSC is not safe to use in SMP mode 00:16:24.914 EAL: TSC is not invariant 00:16:24.914 [2024-07-15 09:42:52.981514] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.174 [2024-07-15 09:42:53.093737] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:25.174 09:42:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:16:25.174 09:42:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:25.174 09:42:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:25.174 09:42:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:25.174 09:42:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:25.174 09:42:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:25.174 09:42:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:16:25.174 09:42:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 
00:16:25.174 [2024-07-15 09:42:53.106713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:16:25.174 09:42:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1
00:16:25.174 09:42:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c
00:16:25.174 09:42:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:16:25.174 09:42:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0
00:16:25.174 09:42:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
00:16:25.174 09:42:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes'
00:16:25.174 09:42:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software
00:16:25.174 09:42:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:16:25.174 09:42:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:16:25.174 09:42:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:16:25.174 09:42:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1
00:16:25.174 09:42:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
00:16:25.174 09:42:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes
00:16:26.556 09:42:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:16:26.556 09:42:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:16:26.556 09:42:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:16:26.556 
00:16:26.556 real 0m2.047s
00:16:26.556 user 0m1.332s
00:16:26.556 sys 0m0.729s
00:16:26.556 09:42:54 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:16:26.556 09:42:54 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:16:26.556 ************************************
00:16:26.556 END TEST accel_copy_crc32c_C2
00:16:26.556 ************************************
00:16:26.556 09:42:54 accel -- common/autotest_common.sh@1142 -- # return 0
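The repeated IFS=:/read/case lines in the xtrace above are the accel.sh harness replaying the app's config echo through a field-split read loop. A minimal sketch of that loop, assuming the surrounding function body; only var/val, accel_opc, and accel_module are taken from the trace, and the case keys are guesses since the trace prints values only:

    # Sketch of the parse loop the xtrace shows: split each "key: value" line
    # on the first ':' and remember the opcode and module the app reports.
    while IFS=: read -r var val; do
      case "$var" in
        *opcode*) accel_opc=$val ;;    # assumed key name; trace shows only the value
        *module*) accel_module=$val ;; # assumed key name; trace shows only the value
      esac
    done < perf_output.txt             # perf_output.txt is a hypothetical capture
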
00:16:26.556 09:42:54 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:16:26.556 09:42:54 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:16:26.556 09:42:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:16:26.556 09:42:54 accel -- common/autotest_common.sh@10 -- # set +x
00:16:26.556 ************************************
00:16:26.556 START TEST accel_dualcast
00:16:26.556 ************************************
00:16:26.556 09:42:54 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y
00:16:26.556 09:42:54 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc
00:16:26.556 09:42:54 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module
00:16:26.556 09:42:54 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:16:26.556 09:42:54 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.Vz6NZo -t 1 -w dualcast -y
00:16:26.556 [2024-07-15 09:42:54.399813] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization...
00:16:26.556 [2024-07-15 09:42:54.400124] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:16:27.140 EAL: TSC is not safe to use in SMP mode
00:16:27.140 EAL: TSC is not invariant
00:16:27.140 [2024-07-15 09:42:55.090342] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:27.140 [2024-07-15 09:42:55.205385] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:16:27.140 09:42:55 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config
00:16:27.140 09:42:55 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=()
00:16:27.140 09:42:55 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:16:27.140 09:42:55 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:16:27.140 09:42:55 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:16:27.140 09:42:55 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]]
00:16:27.140 09:42:55 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=,
00:16:27.140 09:42:55 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r .
00:16:27.140 [2024-07-15 09:42:55.220847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:16:27.140 09:42:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1
00:16:27.140 09:42:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast
00:16:27.140 09:42:55 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast
00:16:27.140 09:42:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes'
00:16:27.400 09:42:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software
00:16:27.400 09:42:55 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software
00:16:27.400 09:42:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:16:27.400 09:42:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:16:27.400 09:42:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1
00:16:27.400 09:42:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds'
00:16:27.400 09:42:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes
00:16:28.780 09:42:56 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:16:28.780 09:42:56 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:16:28.780 09:42:56 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:16:28.780 
00:16:28.780 real 0m2.051s
00:16:28.780 user 0m1.296s
00:16:28.780 sys 0m0.768s
00:16:28.780 09:42:56 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable
00:16:28.780 09:42:56 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x
00:16:28.780 ************************************
00:16:28.780 END TEST accel_dualcast
00:16:28.780 ************************************
00:16:28.780 09:42:56 accel -- common/autotest_common.sh@1142 -- # return 0
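The perf binary each case wraps can be re-run by hand with the arguments the trace shows. The -c argument in the log points at a throwaway temp config (a /tmp//sh-np.* file); running without it, as below, is an assumption about acceptable usage rather than something this log demonstrates:

    # Hypothetical manual re-run of the dualcast case above, flags copied
    # verbatim from the logged command line (-t 1 -w dualcast -y):
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y
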
00:16:28.780 09:42:56 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:16:28.780 09:42:56 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:16:28.780 09:42:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:16:28.780 09:42:56 accel -- common/autotest_common.sh@10 -- # set +x
00:16:28.780 ************************************
00:16:28.780 START TEST accel_compare
00:16:28.780 ************************************
00:16:28.780 09:42:56 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y
00:16:28.780 09:42:56 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc
00:16:28.780 09:42:56 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module
00:16:28.780 09:42:56 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:16:28.780 09:42:56 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.0Ds33D -t 1 -w compare -y
00:16:28.780 [2024-07-15 09:42:56.511819] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization...
00:16:28.780 [2024-07-15 09:42:56.512137] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:16:29.348 EAL: TSC is not safe to use in SMP mode
00:16:29.348 EAL: TSC is not invariant
00:16:29.348 [2024-07-15 09:42:57.211855] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:29.348 [2024-07-15 09:42:57.320984] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:16:29.348 09:42:57 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config
00:16:29.348 09:42:57 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=()
00:16:29.348 09:42:57 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:16:29.348 09:42:57 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:16:29.348 09:42:57 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:16:29.348 09:42:57 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]]
00:16:29.348 09:42:57 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=,
00:16:29.348 09:42:57 accel.accel_compare -- accel/accel.sh@41 -- # jq -r .
00:16:29.348 [2024-07-15 09:42:57.333151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:16:29.349 09:42:57 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1
00:16:29.349 09:42:57 accel.accel_compare -- accel/accel.sh@20 -- # val=compare
00:16:29.349 09:42:57 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare
00:16:29.349 09:42:57 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes'
00:16:29.349 09:42:57 accel.accel_compare -- accel/accel.sh@20 -- # val=software
00:16:29.349 09:42:57 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software
00:16:29.349 09:42:57 accel.accel_compare -- accel/accel.sh@20 -- # val=32
00:16:29.349 09:42:57 accel.accel_compare -- accel/accel.sh@20 -- # val=32
00:16:29.349 09:42:57 accel.accel_compare -- accel/accel.sh@20 -- # val=1
00:16:29.349 09:42:57 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds'
00:16:29.349 09:42:57 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes
00:16:30.728 09:42:58 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]]
00:16:30.728 09:42:58 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]]
00:16:30.728 09:42:58 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:16:30.728 
00:16:30.728 real 0m2.049s
00:16:30.728 user 0m1.318s
00:16:30.728 sys 0m0.739s
00:16:30.728 09:42:58 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable
00:16:30.728 ************************************
00:16:30.728 END TEST accel_compare
00:16:30.728 ************************************
00:16:30.728 09:42:58 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x
00:16:30.728 09:42:58 accel -- common/autotest_common.sh@1142 -- # return 0
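Each case ends with the three [[ ... ]] tests at accel/accel.sh@27 in the trace; the backslash-escaped right-hand side is just how xtrace prints a quoted literal. As plain bash they amount to the following sketch, with the harness's actual failure handling assumed:

    # Post-run assertions reconstructed from the trace: a module and an opcode
    # must have been parsed back, and the module in use must be "software".
    [[ -n $accel_module ]] || exit 1   # failure handling is an assumption
    [[ -n $accel_opc ]] || exit 1
    [[ $accel_module == software ]] || exit 1
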
00:16:30.728 09:42:58 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:16:30.728 09:42:58 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:16:30.728 09:42:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:16:30.728 09:42:58 accel -- common/autotest_common.sh@10 -- # set +x
00:16:30.728 ************************************
00:16:30.728 START TEST accel_xor
00:16:30.728 ************************************
00:16:30.728 09:42:58 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y
00:16:30.728 09:42:58 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc
00:16:30.728 09:42:58 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module
00:16:30.728 09:42:58 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
00:16:30.728 09:42:58 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.I0VeDH -t 1 -w xor -y
00:16:30.728 [2024-07-15 09:42:58.621566] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization...
00:16:30.728 [2024-07-15 09:42:58.621880] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:16:31.298 EAL: TSC is not safe to use in SMP mode
00:16:31.298 EAL: TSC is not invariant
00:16:31.298 [2024-07-15 09:42:59.318243] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:31.558 [2024-07-15 09:42:59.430596] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:16:31.558 09:42:59 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
00:16:31.558 09:42:59 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=()
00:16:31.558 09:42:59 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:16:31.558 09:42:59 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:16:31.558 09:42:59 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:16:31.558 09:42:59 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]]
00:16:31.558 09:42:59 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=,
00:16:31.558 09:42:59 accel.accel_xor -- accel/accel.sh@41 -- # jq -r .
00:16:31.558 [2024-07-15 09:42:59.444098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:16:31.558 09:42:59 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1
00:16:31.558 09:42:59 accel.accel_xor -- accel/accel.sh@20 -- # val=xor
00:16:31.558 09:42:59 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor
00:16:31.558 09:42:59 accel.accel_xor -- accel/accel.sh@20 -- # val=2
00:16:31.558 09:42:59 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes'
00:16:31.558 09:42:59 accel.accel_xor -- accel/accel.sh@20 -- # val=software
00:16:31.558 09:42:59 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software
00:16:31.558 09:42:59 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:16:31.558 09:42:59 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:16:31.558 09:42:59 accel.accel_xor -- accel/accel.sh@20 -- # val=1
00:16:31.558 09:42:59 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds'
00:16:31.558 09:42:59 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes
00:16:32.940 09:43:00 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:16:32.940 09:43:00 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:16:32.940 09:43:00 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:16:32.940 
00:16:32.940 real 0m2.054s
00:16:32.940 user 0m1.306s
00:16:32.940 sys 0m0.757s
00:16:32.940 09:43:00 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable
00:16:32.940 ************************************
00:16:32.940 END TEST accel_xor
00:16:32.940 ************************************
00:16:32.940 09:43:00 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:16:32.940 09:43:00 accel -- common/autotest_common.sh@1142 -- # return 0
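The START/END banners and the real/user/sys triplet around every case come from the run_test wrapper in autotest_common.sh. A hypothetical reconstruction of only its visible behavior, assuming bash's time builtin produces the timing lines; the real wrapper's argument checks and bookkeeping are not shown in this log:

    # Sketch of what produces the banners and timings seen above (assumed):
    run_test() {
      local name=$1; shift
      echo '************************************'
      echo "START TEST $name"
      echo '************************************'
      time "$@"                # bash time emits the real/user/sys lines
      echo '************************************'
      echo "END TEST $name"
      echo '************************************'
    }
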
00:16:32.940 09:43:00 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:16:32.940 09:43:00 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:16:32.940 09:43:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:16:32.940 09:43:00 accel -- common/autotest_common.sh@10 -- # set +x
00:16:32.940 ************************************
00:16:32.940 START TEST accel_xor
00:16:32.940 ************************************
00:16:32.940 09:43:00 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3
00:16:32.940 09:43:00 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc
00:16:32.940 09:43:00 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module
00:16:32.940 09:43:00 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3
00:16:32.940 09:43:00 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.Z26wjH -t 1 -w xor -y -x 3
00:16:32.940 [2024-07-15 09:43:00.731724] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization...
00:16:32.940 [2024-07-15 09:43:00.732031] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:16:33.507 EAL: TSC is not safe to use in SMP mode
00:16:33.507 EAL: TSC is not invariant
00:16:33.507 [2024-07-15 09:43:01.433036] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:33.507 [2024-07-15 09:43:01.546970] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:16:33.507 09:43:01 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
00:16:33.507 09:43:01 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=()
00:16:33.507 09:43:01 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:16:33.507 09:43:01 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:16:33.507 09:43:01 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:16:33.507 09:43:01 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]]
00:16:33.507 09:43:01 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=,
00:16:33.507 09:43:01 accel.accel_xor -- accel/accel.sh@41 -- # jq -r .
00:16:33.507 [2024-07-15 09:43:01.560624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:16:33.508 09:43:01 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1
00:16:33.508 09:43:01 accel.accel_xor -- accel/accel.sh@20 -- # val=xor
00:16:33.508 09:43:01 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor
00:16:33.508 09:43:01 accel.accel_xor -- accel/accel.sh@20 -- # val=3
00:16:33.508 09:43:01 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes'
00:16:33.508 09:43:01 accel.accel_xor -- accel/accel.sh@20 -- # val=software
00:16:33.508 09:43:01 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software
00:16:33.508 09:43:01 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:16:33.508 09:43:01 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:16:33.508 09:43:01 accel.accel_xor -- accel/accel.sh@20 -- # val=1
00:16:33.508 09:43:01 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds'
00:16:33.508 09:43:01 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes
00:16:34.888 09:43:02 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:16:34.888 09:43:02 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:16:34.888 09:43:02 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:16:34.888 
00:16:34.888 real 0m2.062s
00:16:34.888 user 0m1.311s
00:16:34.888 sys 0m0.762s
00:16:34.888 09:43:02 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable
00:16:34.888 ************************************
00:16:34.888 END TEST accel_xor
00:16:34.888 ************************************
00:16:34.888 09:43:02 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:16:34.888 09:43:02 accel -- common/autotest_common.sh@1142 -- # return 0
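The only difference between the two xor cases is the extra '-x 3' argument, which the trace shows flowing through run_test into accel_perf and coming back as val=3 where the first run read val=2. Interpreting -x as the number of xor source buffers is an assumption based on those reads:

    # The two invocations as logged; accel_test is the harness function, so
    # calling it standalone like this is hypothetical:
    accel_test -t 1 -w xor -y        # trace reads back val=2 (assumed default)
    accel_test -t 1 -w xor -y -x 3   # trace reads back val=3
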
00:16:34.888 09:43:02 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:16:34.888 09:43:02 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:16:34.888 09:43:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:16:34.888 09:43:02 accel -- common/autotest_common.sh@10 -- # set +x
00:16:34.888 ************************************
00:16:34.888 START TEST accel_dif_verify
00:16:34.888 ************************************
00:16:34.888 09:43:02 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify
00:16:34.888 09:43:02 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc
00:16:34.888 09:43:02 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module
00:16:34.888 09:43:02 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:16:34.888 09:43:02 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.VDFZlk -t 1 -w dif_verify
00:16:34.888 [2024-07-15 09:43:02.852970] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization...
00:16:34.888 [2024-07-15 09:43:02.853312] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:16:35.484 EAL: TSC is not safe to use in SMP mode
00:16:35.484 EAL: TSC is not invariant
00:16:35.484 [2024-07-15 09:43:03.556281] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:35.747 [2024-07-15 09:43:03.661524] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:16:35.747 09:43:03 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config
00:16:35.747 09:43:03 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=()
00:16:35.747 09:43:03 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:16:35.747 09:43:03 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:16:35.747 09:43:03 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:16:35.747 09:43:03 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]]
00:16:35.747 09:43:03 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=,
00:16:35.747 09:43:03 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r .
00:16:35.747 [2024-07-15 09:43:03.674698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:16:35.747 09:43:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1
00:16:35.747 09:43:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify
00:16:35.747 09:43:03 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify
00:16:35.747 09:43:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes'
00:16:35.747 09:43:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes'
00:16:35.747 09:43:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes'
00:16:35.747 09:43:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes'
00:16:35.747 09:43:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software
00:16:35.747 09:43:03 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software
00:16:35.747 09:43:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32
00:16:35.747 09:43:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32
00:16:35.747 09:43:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1
00:16:35.747 09:43:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds'
00:16:35.747 09:43:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No
00:16:37.130 09:43:04 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]]
00:16:37.130 09:43:04 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
00:16:37.130 09:43:04 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:16:37.130 
00:16:37.130 real 0m2.054s
00:16:37.130 user 0m1.305s
00:16:37.130 sys 0m0.763s
00:16:37.130 09:43:04 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:16:37.130 09:43:04 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x
00:16:37.130 ************************************
00:16:37.130 END TEST accel_dif_verify
00:16:37.130 ************************************
00:16:37.130 09:43:04 accel -- common/autotest_common.sh@1142 -- # return 0
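Unlike the copy-style workloads, the dif_verify case echoes back two 4096-byte sizes plus '512 bytes' and '8 bytes', and ends with val=No rather than val=Yes. Mapping those to transfer size, DIF-protected block size, chunk/metadata granularity, and the 8-byte protection-information tuple is an assumption from DIF's usual layout, not something this log states:

    # Hypothetical manual re-run of the dif_verify case as logged; the size
    # meanings in the comment above are assumptions, only the values are
    # taken from the trace (4096, 4096, 512, 8 bytes):
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify
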
00:16:37.130 09:43:04 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:16:37.130 09:43:04 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:16:37.130 09:43:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:16:37.130 09:43:04 accel -- common/autotest_common.sh@10 -- # set +x
00:16:37.130 ************************************
00:16:37.130 START TEST accel_dif_generate
00:16:37.130 ************************************
00:16:37.130 09:43:04 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate
00:16:37.130 09:43:04 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc
00:16:37.130 09:43:04 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module
00:16:37.130 09:43:04 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
00:16:37.130 09:43:04 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.UqU7M5 -t 1 -w dif_generate
00:16:37.130 [2024-07-15 09:43:04.960461] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization...
00:16:37.130 [2024-07-15 09:43:04.960842] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:16:37.700 EAL: TSC is not safe to use in SMP mode
00:16:37.700 EAL: TSC is not invariant
00:16:37.700 [2024-07-15 09:43:05.681174] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:37.966 [2024-07-15 09:43:05.798378] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:16:37.966 09:43:05 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config
00:16:37.966 09:43:05 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=()
00:16:37.966 09:43:05 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:16:37.966 09:43:05 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:16:37.966 09:43:05 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:16:37.966 09:43:05 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]]
00:16:37.966 09:43:05 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=,
00:16:37.966 09:43:05 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r .
00:16:37.966 [2024-07-15 09:43:05.811562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:16:37.966 09:43:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1
00:16:37.966 09:43:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate
00:16:37.966 09:43:05 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate
00:16:37.966 09:43:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes'
00:16:37.966 09:43:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes'
00:16:37.966 09:43:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes'
00:16:37.966 09:43:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes'
00:16:37.966 09:43:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:16:38.991 09:43:07
accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:16:38.991 09:43:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:38.991 09:43:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:38.991 09:43:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:38.991 09:43:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:16:38.991 09:43:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:38.991 09:43:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:38.991 09:43:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:38.991 09:43:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:16:38.991 09:43:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:38.991 09:43:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:38.991 09:43:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:38.991 09:43:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:16:38.991 09:43:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:38.991 09:43:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:38.991 09:43:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:38.991 09:43:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:16:38.991 09:43:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:38.991 09:43:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:38.991 09:43:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:38.991 09:43:07 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:38.991 09:43:07 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:16:38.991 09:43:07 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:38.991 00:16:38.991 real 0m2.089s 00:16:38.991 user 0m1.320s 00:16:38.991 sys 0m0.783s 00:16:38.991 09:43:07 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:38.991 ************************************ 00:16:38.991 END TEST accel_dif_generate 00:16:38.991 ************************************ 00:16:38.991 09:43:07 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:16:39.264 09:43:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:39.264 09:43:07 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:16:39.264 09:43:07 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:16:39.264 09:43:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:39.264 09:43:07 accel -- common/autotest_common.sh@10 -- # set +x 00:16:39.264 ************************************ 00:16:39.264 START TEST accel_dif_generate_copy 00:16:39.264 ************************************ 00:16:39.264 09:43:07 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:16:39.264 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:16:39.264 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:16:39.264 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:39.264 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:39.264 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
dif_generate_copy 00:16:39.264 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.bZDM98 -t 1 -w dif_generate_copy 00:16:39.264 [2024-07-15 09:43:07.097425] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:16:39.264 [2024-07-15 09:43:07.097746] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:39.834 EAL: TSC is not safe to use in SMP mode 00:16:39.834 EAL: TSC is not invariant 00:16:39.834 [2024-07-15 09:43:07.825103] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.094 [2024-07-15 09:43:07.938449] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:16:40.094 [2024-07-15 09:43:07.952261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # 
val=dif_generate_copy 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" 
in 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:40.094 09:43:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:41.469 09:43:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:16:41.469 09:43:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:41.469 09:43:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:41.469 09:43:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:41.469 09:43:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:16:41.469 09:43:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:41.469 09:43:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:41.469 09:43:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:41.469 09:43:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:16:41.469 09:43:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:41.469 09:43:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:41.469 09:43:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:41.469 09:43:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:16:41.469 09:43:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:41.469 09:43:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:41.469 09:43:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:41.469 09:43:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:16:41.470 09:43:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:41.470 09:43:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:41.470 09:43:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:41.470 09:43:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:16:41.470 09:43:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:41.470 09:43:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:41.470 09:43:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:41.470 09:43:09 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:41.470 09:43:09 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:16:41.470 09:43:09 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:41.470 00:16:41.470 real 0m2.089s 00:16:41.470 user 0m1.320s 00:16:41.470 sys 0m0.781s 00:16:41.470 09:43:09 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:41.470 09:43:09 
accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:16:41.470 ************************************ 00:16:41.470 END TEST accel_dif_generate_copy 00:16:41.470 ************************************ 00:16:41.470 09:43:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:41.470 09:43:09 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:16:41.470 09:43:09 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:41.470 09:43:09 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:16:41.470 09:43:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:41.470 09:43:09 accel -- common/autotest_common.sh@10 -- # set +x 00:16:41.470 ************************************ 00:16:41.470 START TEST accel_comp 00:16:41.470 ************************************ 00:16:41.470 09:43:09 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:41.470 09:43:09 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:16:41.470 09:43:09 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:16:41.470 09:43:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:41.470 09:43:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:41.470 09:43:09 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:41.470 09:43:09 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.onc0Ox -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:41.470 [2024-07-15 09:43:09.239595] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:16:41.470 [2024-07-15 09:43:09.239930] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:42.036 EAL: TSC is not safe to use in SMP mode 00:16:42.036 EAL: TSC is not invariant 00:16:42.036 [2024-07-15 09:43:09.945098] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.036 [2024-07-15 09:43:10.063789] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 
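The '[' 8 -le 1 ']' check, the START TEST / END TEST banners, and the real/user/sys summary bracketing each case all come from the run_test helper traced at autotest_common.sh@1099-1142. A rough sketch of what that wrapper must be doing, inferred from this log only — the actual helper's timing and xtrace handling differ in detail:
run_test() {
  local name=$1; shift
  [ "$#" -le 1 ] && echo "run_test: need a command plus args" >&2  # mirrors the '[' N -le 1 ']' check
  echo "************ START TEST $name ************"
  time "$@"                         # produces the real/user/sys lines seen after each case
  local rc=$?
  echo "************ END TEST $name ************"
  return $rc                        # the trace's 'return 0' on success
}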
00:16:42.036 [2024-07-15 09:43:10.078017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:42.036 09:43:10 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:42.036 09:43:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:43.415 09:43:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:16:43.415 09:43:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:43.415 09:43:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:43.415 09:43:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:43.415 09:43:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:16:43.415 09:43:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:43.415 09:43:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:43.415 09:43:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:43.415 09:43:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:16:43.415 09:43:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:43.415 09:43:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:43.415 09:43:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:43.415 09:43:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:16:43.415 09:43:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:43.415 09:43:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:43.415 09:43:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:43.415 09:43:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:16:43.415 
09:43:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:43.415 09:43:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:43.415 09:43:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:43.415 09:43:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:16:43.415 09:43:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:43.415 09:43:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:43.415 09:43:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:43.415 09:43:11 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:43.415 09:43:11 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:16:43.415 09:43:11 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:43.415 00:16:43.415 real 0m2.073s 00:16:43.415 user 0m1.329s 00:16:43.415 sys 0m0.758s 00:16:43.415 09:43:11 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:43.415 09:43:11 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:16:43.415 ************************************ 00:16:43.415 END TEST accel_comp 00:16:43.415 ************************************ 00:16:43.415 09:43:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:43.415 09:43:11 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:43.415 09:43:11 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:16:43.415 09:43:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:43.415 09:43:11 accel -- common/autotest_common.sh@10 -- # set +x 00:16:43.415 ************************************ 00:16:43.415 START TEST accel_decomp 00:16:43.415 ************************************ 00:16:43.415 09:43:11 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:43.415 09:43:11 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:16:43.415 09:43:11 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:16:43.416 09:43:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:43.416 09:43:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:43.416 09:43:11 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:43.416 09:43:11 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.DZF1P9 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:43.416 [2024-07-15 09:43:11.369919] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:16:43.416 [2024-07-15 09:43:11.370232] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:43.983 EAL: TSC is not safe to use in SMP mode 00:16:43.983 EAL: TSC is not invariant 00:16:44.242 [2024-07-15 09:43:12.081172] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.242 [2024-07-15 09:43:12.194239] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
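Per the accel.sh@12 lines above, each case ultimately boils down to a single accel_perf invocation. Rerunning this decompress case by hand would look roughly like the following, using the same checkout path as the log; the -c /tmp//sh-np.* config file is generated fresh per run, so it is omitted, and the flag glosses are inferred from this log rather than from accel_perf's own help text:
cd /home/vagrant/spdk_repo/spdk
./build/examples/accel_perf -t 1 -w decompress \
    -l test/accel/bib -y   # -t run seconds, -w workload, -l input file, -y verify the output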
00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:16:44.242 [2024-07-15 09:43:12.207768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:44.242 09:43:12 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:44.242 09:43:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:45.688 09:43:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:16:45.688 09:43:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:45.688 09:43:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:45.688 09:43:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:45.688 09:43:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:16:45.688 09:43:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:45.688 09:43:13 
accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:45.688 09:43:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:45.688 09:43:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:16:45.688 09:43:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:45.688 09:43:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:45.688 09:43:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:45.688 09:43:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:16:45.688 09:43:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:45.688 09:43:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:45.688 09:43:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:45.688 09:43:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:16:45.688 09:43:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:45.688 09:43:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:45.688 09:43:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:45.688 09:43:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:16:45.688 09:43:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:45.688 09:43:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:45.688 09:43:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:45.688 09:43:13 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:45.688 09:43:13 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:16:45.688 09:43:13 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:45.688 00:16:45.688 real 0m2.072s 00:16:45.688 user 0m1.329s 00:16:45.688 sys 0m0.757s 00:16:45.688 09:43:13 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:45.689 09:43:13 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:16:45.689 ************************************ 00:16:45.689 END TEST accel_decomp 00:16:45.689 ************************************ 00:16:45.689 09:43:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:45.689 09:43:13 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:16:45.689 09:43:13 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:16:45.689 09:43:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:45.689 09:43:13 accel -- common/autotest_common.sh@10 -- # set +x 00:16:45.689 ************************************ 00:16:45.689 START TEST accel_decomp_full 00:16:45.689 ************************************ 00:16:45.689 09:43:13 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:16:45.689 09:43:13 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:16:45.689 09:43:13 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:16:45.689 09:43:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:45.689 09:43:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:45.689 09:43:13 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:16:45.689 09:43:13 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.HlKcxU -t 1 -w decompress -l 
/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:16:45.689 [2024-07-15 09:43:13.498528] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:16:45.689 [2024-07-15 09:43:13.498843] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:46.258 EAL: TSC is not safe to use in SMP mode 00:16:46.258 EAL: TSC is not invariant 00:16:46.258 [2024-07-15 09:43:14.194388] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.258 [2024-07-15 09:43:14.307877] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:16:46.258 [2024-07-15 09:43:14.321133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # 
val=decompress 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:16:46.258 09:43:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:46.259 09:43:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:46.259 09:43:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:46.259 09:43:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:16:46.259 09:43:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:46.259 09:43:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:46.259 09:43:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:46.259 09:43:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:16:46.259 09:43:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:46.259 09:43:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:46.259 09:43:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:46.259 09:43:14 accel.accel_decomp_full 
-- accel/accel.sh@20 -- # val= 00:16:46.259 09:43:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:46.259 09:43:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:46.259 09:43:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:46.259 09:43:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:16:46.259 09:43:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:46.259 09:43:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:46.259 09:43:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:47.638 09:43:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:16:47.638 09:43:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:47.638 09:43:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:47.638 09:43:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:47.638 09:43:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:16:47.638 09:43:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:47.638 09:43:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:47.638 09:43:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:47.638 09:43:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:16:47.638 09:43:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:47.638 09:43:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:47.638 09:43:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:47.638 09:43:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:16:47.638 09:43:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:47.638 09:43:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:47.638 09:43:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:47.638 09:43:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:16:47.638 09:43:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:47.638 09:43:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:47.638 09:43:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:47.638 09:43:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:16:47.638 09:43:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:47.638 09:43:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:47.638 09:43:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:47.638 09:43:15 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:47.638 09:43:15 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:16:47.638 09:43:15 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:47.638 00:16:47.638 real 0m2.062s 00:16:47.638 user 0m1.316s 00:16:47.638 sys 0m0.760s 00:16:47.638 09:43:15 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:47.638 ************************************ 00:16:47.638 09:43:15 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:16:47.638 END TEST accel_decomp_full 00:16:47.638 ************************************ 00:16:47.638 09:43:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:47.638 09:43:15 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 
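accel_decomp_mcore is the first case here to pass -m 0xf, so the EAL parameters line that follows reports -c 0xf and four reactors start instead of one. The mask is plain binary: 0xf = 1111, and bit i set means core i is used, i.e. cores 0-3. A one-liner to expand any such mask (shell arithmetic only, nothing SPDK-specific):
mask=0xf; for ((i = 0; i < 8; i++)); do (( (mask >> i) & 1 )) && printf 'core %d\n' "$i"; done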
00:16:47.638 09:43:15 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:16:47.638 09:43:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:47.638 09:43:15 accel -- common/autotest_common.sh@10 -- # set +x 00:16:47.638 ************************************ 00:16:47.638 START TEST accel_decomp_mcore 00:16:47.638 ************************************ 00:16:47.638 09:43:15 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:16:47.638 09:43:15 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:16:47.638 09:43:15 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:16:47.638 09:43:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:47.638 09:43:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:47.638 09:43:15 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:16:47.638 09:43:15 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.fH1q6L -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:16:47.638 [2024-07-15 09:43:15.618211] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:16:47.638 [2024-07-15 09:43:15.618540] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:48.583 EAL: TSC is not safe to use in SMP mode 00:16:48.583 EAL: TSC is not invariant 00:16:48.583 [2024-07-15 09:43:16.325536] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:48.584 [2024-07-15 09:43:16.437705] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:48.584 [2024-07-15 09:43:16.437774] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:16:48.584 [2024-07-15 09:43:16.437781] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:16:48.584 [2024-07-15 09:43:16.437788] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 
00:16:48.584 [2024-07-15 09:43:16.453416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.584 [2024-07-15 09:43:16.453275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:48.584 [2024-07-15 09:43:16.453338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:48.584 [2024-07-15 09:43:16.453413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 
00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:48.584 09:43:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:50.001 09:43:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:50.001 09:43:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:50.001 09:43:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:50.001 09:43:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:50.001 09:43:17 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val= 00:16:50.001 09:43:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:50.001 09:43:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:50.001 09:43:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:50.001 09:43:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:50.002 09:43:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:50.002 09:43:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:50.002 09:43:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:50.002 09:43:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:50.002 09:43:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:50.002 09:43:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:50.002 09:43:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:50.002 09:43:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:50.002 09:43:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:50.002 09:43:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:50.002 09:43:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:50.002 09:43:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:50.002 09:43:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:50.002 09:43:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:50.002 09:43:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:50.002 09:43:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:50.002 09:43:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:50.002 09:43:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:50.002 09:43:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:50.002 09:43:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:50.002 09:43:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:50.002 09:43:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:50.002 09:43:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:50.002 09:43:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:50.002 09:43:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:50.002 09:43:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:50.002 09:43:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:50.002 09:43:17 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:50.002 09:43:17 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:16:50.002 09:43:17 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:50.002 ************************************ 00:16:50.002 END TEST accel_decomp_mcore 00:16:50.002 ************************************ 00:16:50.002 00:16:50.002 real 0m2.076s 00:16:50.002 user 0m4.498s 00:16:50.002 sys 0m0.769s 00:16:50.002 09:43:17 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:50.002 09:43:17 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:16:50.002 09:43:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:50.002 09:43:17 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l 
/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:16:50.002 09:43:17 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:16:50.002 09:43:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:50.002 09:43:17 accel -- common/autotest_common.sh@10 -- # set +x 00:16:50.002 ************************************ 00:16:50.002 START TEST accel_decomp_full_mcore 00:16:50.002 ************************************ 00:16:50.002 09:43:17 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:16:50.002 09:43:17 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:16:50.002 09:43:17 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:16:50.002 09:43:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:50.002 09:43:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:50.002 09:43:17 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:16:50.002 09:43:17 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.k4jYcl -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:16:50.002 [2024-07-15 09:43:17.750550] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:16:50.002 [2024-07-15 09:43:17.750836] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:50.571 EAL: TSC is not safe to use in SMP mode 00:16:50.571 EAL: TSC is not invariant 00:16:50.571 [2024-07-15 09:43:18.458618] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:50.571 [2024-07-15 09:43:18.570619] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:50.571 [2024-07-15 09:43:18.570708] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:16:50.571 [2024-07-15 09:43:18.570716] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:16:50.571 [2024-07-15 09:43:18.570722] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 
00:16:50.571 [2024-07-15 09:43:18.582554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.571 [2024-07-15 09:43:18.582403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:50.571 [2024-07-15 09:43:18.582479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:50.571 [2024-07-15 09:43:18.582550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" 
in 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:50.571 09:43:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:51.953 
09:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:51.953 09:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:51.953 09:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:51.953 09:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:51.953 09:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:51.953 09:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:51.953 09:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:51.953 09:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:51.953 09:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:51.953 09:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:51.953 09:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:51.953 09:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:51.953 09:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:51.953 09:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:51.953 09:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:51.953 09:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:51.953 09:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:51.953 09:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:51.953 09:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:51.953 09:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:51.953 09:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:51.953 09:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:51.953 09:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:51.953 09:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:51.953 09:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:51.953 09:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:51.953 09:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:51.953 09:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:51.953 09:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:51.953 09:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:51.953 09:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:51.953 09:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:51.953 09:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:51.953 09:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:51.953 09:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:51.953 09:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:51.953 09:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:51.953 09:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:16:51.953 09:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:51.953 00:16:51.953 real 0m2.082s 00:16:51.953 user 0m4.551s 
00:16:51.953 sys 0m0.756s 00:16:51.953 09:43:19 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:51.953 09:43:19 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:16:51.953 ************************************ 00:16:51.953 END TEST accel_decomp_full_mcore 00:16:51.953 ************************************ 00:16:51.953 09:43:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:51.953 09:43:19 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:16:51.953 09:43:19 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:16:51.953 09:43:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:51.953 09:43:19 accel -- common/autotest_common.sh@10 -- # set +x 00:16:51.953 ************************************ 00:16:51.953 START TEST accel_decomp_mthread 00:16:51.953 ************************************ 00:16:51.953 09:43:19 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:16:51.953 09:43:19 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:16:51.953 09:43:19 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:16:51.953 09:43:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:51.953 09:43:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:51.953 09:43:19 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:16:51.953 09:43:19 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.uoJEGi -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:16:51.953 [2024-07-15 09:43:19.889818] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:16:51.953 [2024-07-15 09:43:19.890142] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:52.521 EAL: TSC is not safe to use in SMP mode 00:16:52.521 EAL: TSC is not invariant 00:16:52.521 [2024-07-15 09:43:20.594660] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.781 [2024-07-15 09:43:20.707568] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
00:16:52.781 [2024-07-15 09:43:20.721698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@22 
-- # accel_module=software 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:52.781 09:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:54.159 09:43:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:54.159 09:43:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:54.159 09:43:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:54.159 09:43:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:54.159 09:43:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:54.159 09:43:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:54.159 09:43:21 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:54.159 09:43:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:54.159 09:43:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:54.159 09:43:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:54.159 09:43:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:54.159 09:43:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:54.159 09:43:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:54.159 09:43:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:54.159 09:43:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:54.159 09:43:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:54.159 09:43:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:54.159 09:43:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:54.159 09:43:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:54.159 09:43:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:54.159 09:43:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:54.159 09:43:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:54.159 09:43:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:54.159 09:43:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:54.159 09:43:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:54.159 09:43:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:54.159 09:43:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:54.159 09:43:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:54.159 09:43:21 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:54.159 09:43:21 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:16:54.159 09:43:21 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:54.159 00:16:54.159 real 0m2.075s 00:16:54.159 user 0m1.323s 00:16:54.159 sys 0m0.762s 00:16:54.159 09:43:21 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:54.159 09:43:21 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:16:54.159 ************************************ 00:16:54.159 END TEST accel_decomp_mthread 00:16:54.159 ************************************ 00:16:54.159 09:43:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:54.159 09:43:21 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:16:54.159 09:43:21 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:16:54.159 09:43:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:54.159 09:43:21 accel -- common/autotest_common.sh@10 -- # set +x 00:16:54.159 ************************************ 00:16:54.159 START TEST accel_decomp_full_mthread 00:16:54.159 ************************************ 00:16:54.159 09:43:22 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:16:54.159 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:16:54.159 09:43:22 
accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:16:54.159 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:54.159 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:54.159 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:16:54.159 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.TuhXy5 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:16:54.159 [2024-07-15 09:43:22.021122] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:16:54.159 [2024-07-15 09:43:22.021466] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:54.728 EAL: TSC is not safe to use in SMP mode 00:16:54.728 EAL: TSC is not invariant 00:16:54.728 [2024-07-15 09:43:22.729241] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.987 [2024-07-15 09:43:22.841583] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 
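The accel_decomp_full_mthread case uses the same binary with the extra -o 0 and -T 2 options recorded in the command line above; a minimal reproduction sketch under the same path assumption (-T 2 presumably requests a second worker thread, an interpretation rather than something the log states):

    # flags copied verbatim from the accel_perf invocation logged above
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2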
00:16:54.987 [2024-07-15 09:43:22.856194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val=software 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:54.987 09:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:56.372 09:43:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:56.372 09:43:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:16:56.372 09:43:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:56.372 09:43:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:56.372 09:43:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:56.372 09:43:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:56.372 09:43:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:56.372 09:43:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:56.372 09:43:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:56.372 09:43:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:56.372 09:43:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:56.372 09:43:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:56.372 09:43:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:56.372 09:43:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:56.372 09:43:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:56.372 09:43:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:56.372 09:43:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:56.372 09:43:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:56.372 09:43:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:56.372 09:43:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:56.372 09:43:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:56.372 09:43:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:56.372 09:43:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:56.372 09:43:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:56.372 09:43:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:56.372 09:43:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:56.372 09:43:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:56.372 09:43:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:56.372 09:43:24 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:56.372 09:43:24 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:16:56.372 09:43:24 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:56.372 00:16:56.372 real 0m2.108s 00:16:56.372 user 0m1.342s 00:16:56.372 sys 0m0.776s 00:16:56.372 09:43:24 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:56.372 09:43:24 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:16:56.372 ************************************ 00:16:56.372 END TEST accel_decomp_full_mthread 00:16:56.372 ************************************ 00:16:56.372 09:43:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:56.372 09:43:24 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:16:56.372 09:43:24 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /tmp//sh-np.oJogdY 00:16:56.372 09:43:24 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:16:56.372 09:43:24 accel -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:16:56.372 09:43:24 accel -- common/autotest_common.sh@10 -- # set +x 00:16:56.372 ************************************ 00:16:56.372 START TEST accel_dif_functional_tests 00:16:56.372 ************************************ 00:16:56.372 09:43:24 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /tmp//sh-np.oJogdY 00:16:56.372 [2024-07-15 09:43:24.186723] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:16:56.372 [2024-07-15 09:43:24.186971] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:56.963 EAL: TSC is not safe to use in SMP mode 00:16:56.963 EAL: TSC is not invariant 00:16:56.963 [2024-07-15 09:43:24.925479] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:56.963 [2024-07-15 09:43:25.039659] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:56.963 [2024-07-15 09:43:25.039707] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:16:56.963 [2024-07-15 09:43:25.039714] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:16:56.963 09:43:25 accel -- accel/accel.sh@137 -- # build_accel_config 00:16:56.963 09:43:25 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:56.963 09:43:25 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:56.963 09:43:25 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:56.963 09:43:25 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:56.963 09:43:25 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:56.963 09:43:25 accel -- accel/accel.sh@40 -- # local IFS=, 00:16:56.963 09:43:25 accel -- accel/accel.sh@41 -- # jq -r . 
00:16:57.222 [2024-07-15 09:43:25.054380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.222 [2024-07-15 09:43:25.054305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.222 [2024-07-15 09:43:25.054376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:57.222 00:16:57.222 00:16:57.222 CUnit - A unit testing framework for C - Version 2.1-3 00:16:57.222 http://cunit.sourceforge.net/ 00:16:57.222 00:16:57.222 00:16:57.222 Suite: accel_dif 00:16:57.222 Test: verify: DIF generated, GUARD check ...passed 00:16:57.222 Test: verify: DIF generated, APPTAG check ...passed 00:16:57.222 Test: verify: DIF generated, REFTAG check ...passed 00:16:57.222 Test: verify: DIF not generated, GUARD check ...passed 00:16:57.222 Test: verify: DIF not generated, APPTAG check ...passed 00:16:57.222 Test: verify: DIF not generated, REFTAG check ...passed 00:16:57.222 Test: verify: APPTAG correct, APPTAG check ...passed 00:16:57.222 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:16:57.222 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:16:57.222 Test: verify: REFTAG incorrect, REFTAG ignore ...[2024-07-15 09:43:25.075432] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:16:57.222 [2024-07-15 09:43:25.075496] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:16:57.222 [2024-07-15 09:43:25.075527] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:16:57.222 [2024-07-15 09:43:25.075590] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:16:57.222 passed 00:16:57.222 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:16:57.222 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 09:43:25.075685] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:16:57.222 passed 00:16:57.222 Test: verify copy: DIF generated, GUARD check ...passed 00:16:57.222 Test: verify copy: DIF generated, APPTAG check ...passed 00:16:57.222 Test: verify copy: DIF generated, REFTAG check ...passed 00:16:57.222 Test: verify copy: DIF not generated, GUARD check ...passed 00:16:57.222 Test: verify copy: DIF not generated, APPTAG check ...passed 00:16:57.222 Test: verify copy: DIF not generated, REFTAG check ...passed 00:16:57.222 Test: generate copy: DIF generated, GUARD check ...passed 00:16:57.222 Test: generate copy: DIF generated, APTTAG check ...passed 00:16:57.222 Test: generate copy: DIF generated, REFTAG check ...passed 00:16:57.222 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:16:57.222 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:16:57.222 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:16:57.222 Test: generate copy: iovecs-len validate ...passed 00:16:57.222 Test: generate copy: buffer alignment validate ...passed 00:16:57.222 00:16:57.222 Run Summary: Type Total Ran Passed Failed Inactive 00:16:57.222 suites 1 1 n/a 0 0 00:16:57.222 tests 26 26 26 0 0 00:16:57.222 asserts 115 115 115 0 n/a 00:16:57.222 00:16:57.222 Elapsed time = 0.000 seconds 00:16:57.222 [2024-07-15 09:43:25.075772] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:16:57.222 [2024-07-15 09:43:25.075801] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 
00:16:57.222 [2024-07-15 09:43:25.075829] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:16:57.222 [2024-07-15 09:43:25.075966] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:16:57.481 00:16:57.481 real 0m1.157s 00:16:57.481 user 0m0.601s 00:16:57.481 sys 0m0.779s 00:16:57.481 09:43:25 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:57.481 09:43:25 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:16:57.481 ************************************ 00:16:57.481 END TEST accel_dif_functional_tests 00:16:57.481 ************************************ 00:16:57.481 09:43:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:57.481 09:43:25 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:16:57.481 00:16:57.481 real 0m47.634s 00:16:57.481 user 0m35.482s 00:16:57.481 sys 0m19.214s 00:16:57.481 09:43:25 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:16:57.481 09:43:25 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:57.481 09:43:25 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:57.481 09:43:25 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:57.481 09:43:25 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:57.481 09:43:25 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:57.481 09:43:25 accel -- common/autotest_common.sh@10 -- # set +x 00:16:57.481 ************************************ 00:16:57.481 END TEST accel 00:16:57.481 ************************************ 00:16:57.481 09:43:25 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:57.481 09:43:25 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:57.481 09:43:25 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:57.481 09:43:25 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:16:57.481 09:43:25 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:57.481 09:43:25 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:16:57.481 09:43:25 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:57.481 09:43:25 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:57.481 09:43:25 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:16:57.481 09:43:25 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:16:57.481 09:43:25 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:57.481 09:43:25 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:57.481 09:43:25 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:16:57.481 09:43:25 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:57.481 09:43:25 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:57.482 09:43:25 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:57.482 09:43:25 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:16:57.482 09:43:25 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:16:57.482 09:43:25 -- common/autotest_common.sh@1142 -- # return 0 00:16:57.482 09:43:25 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:16:57.482 09:43:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:57.482 09:43:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:57.482 09:43:25 -- common/autotest_common.sh@10 -- # set +x 00:16:57.482 ************************************ 00:16:57.482 START TEST accel_rpc 00:16:57.482 ************************************ 00:16:57.482 09:43:25 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:16:57.740 * Looking for test storage... 00:16:57.740 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:16:57.740 09:43:25 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:16:57.740 09:43:25 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=47490 00:16:57.740 09:43:25 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:16:57.740 09:43:25 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 47490 00:16:57.740 09:43:25 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 47490 ']' 00:16:57.740 09:43:25 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.740 09:43:25 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:57.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.740 09:43:25 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.740 09:43:25 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:57.740 09:43:25 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.740 [2024-07-15 09:43:25.627163] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:16:57.740 [2024-07-15 09:43:25.627489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:58.306 EAL: TSC is not safe to use in SMP mode 00:16:58.306 EAL: TSC is not invariant 00:16:58.306 [2024-07-15 09:43:26.338664] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.564 [2024-07-15 09:43:26.451387] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:16:58.564 [2024-07-15 09:43:26.454118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.564 09:43:26 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:58.564 09:43:26 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:16:58.564 09:43:26 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:16:58.564 09:43:26 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:16:58.564 09:43:26 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:16:58.564 09:43:26 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:16:58.564 09:43:26 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:16:58.564 09:43:26 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:58.564 09:43:26 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:58.564 09:43:26 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.564 ************************************ 00:16:58.564 START TEST accel_assign_opcode 00:16:58.564 ************************************ 00:16:58.564 09:43:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:16:58.564 09:43:26 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:16:58.564 09:43:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.564 09:43:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:16:58.564 [2024-07-15 09:43:26.606491] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:16:58.564 09:43:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.564 09:43:26 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:16:58.564 09:43:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.564 09:43:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:16:58.564 [2024-07-15 09:43:26.618480] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:16:58.564 09:43:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.564 09:43:26 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:16:58.564 09:43:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.564 09:43:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:16:58.822 09:43:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.822 09:43:26 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:16:58.822 09:43:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.823 09:43:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:16:58.823 09:43:26 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:16:58.823 09:43:26 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:16:58.823 09:43:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.823 software 00:16:58.823 00:16:58.823 real 0m0.085s 00:16:58.823 user 0m0.022s 00:16:58.823 sys 0m0.002s 00:16:58.823 09:43:26 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:16:58.823 09:43:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:16:58.823 ************************************ 00:16:58.823 END TEST accel_assign_opcode 00:16:58.823 ************************************ 00:16:58.823 09:43:26 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:16:58.823 09:43:26 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 47490 00:16:58.823 09:43:26 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 47490 ']' 00:16:58.823 09:43:26 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 47490 00:16:58.823 09:43:26 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:16:58.823 09:43:26 accel_rpc -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:16:58.823 09:43:26 accel_rpc -- common/autotest_common.sh@956 -- # ps -c -o command 47490 00:16:58.823 09:43:26 accel_rpc -- common/autotest_common.sh@956 -- # tail -1 00:16:58.823 09:43:26 accel_rpc -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:16:58.823 09:43:26 accel_rpc -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:16:58.823 killing process with pid 47490 00:16:58.823 09:43:26 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 47490' 00:16:58.823 09:43:26 accel_rpc -- common/autotest_common.sh@967 -- # kill 47490 00:16:58.823 09:43:26 accel_rpc -- common/autotest_common.sh@972 -- # wait 47490 00:16:59.080 00:16:59.080 real 0m1.689s 00:16:59.080 user 0m1.351s 00:16:59.080 sys 0m0.917s 00:16:59.080 09:43:27 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:59.080 09:43:27 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.080 ************************************ 00:16:59.080 END TEST accel_rpc 00:16:59.080 ************************************ 00:16:59.080 09:43:27 -- common/autotest_common.sh@1142 -- # return 0 00:16:59.080 09:43:27 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:16:59.080 09:43:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:59.080 09:43:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:59.080 09:43:27 -- common/autotest_common.sh@10 -- # set +x 00:16:59.339 ************************************ 00:16:59.339 START TEST app_cmdline 00:16:59.339 ************************************ 00:16:59.339 09:43:27 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:16:59.339 * Looking for test storage... 00:16:59.339 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:16:59.339 09:43:27 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:16:59.339 09:43:27 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=47572 00:16:59.339 09:43:27 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 47572 00:16:59.339 09:43:27 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:16:59.339 09:43:27 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 47572 ']' 00:16:59.339 09:43:27 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.339 09:43:27 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:59.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
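The accel_rpc sequence above drives operation-to-module assignment entirely over JSON-RPC while spdk_tgt is held pre-init by --wait-for-rpc: assigning the copy opcode to the nonexistent module "incorrect" is accepted at assignment time (only a NOTICE is logged), the subsequent assignment to the software module supersedes it, and after framework_start_init the binding is confirmed via accel_get_opc_assignments. A minimal sketch of the same flow against a running target, using the rpc.py script invoked elsewhere in this run:

  # Target must have been started with --wait-for-rpc for the first two calls.
  scripts/rpc.py accel_assign_opc -o copy -m software
  scripts/rpc.py framework_start_init
  scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # expected: software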
00:16:59.339 09:43:27 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.339 09:43:27 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:59.339 09:43:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:16:59.339 [2024-07-15 09:43:27.360639] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:16:59.339 [2024-07-15 09:43:27.360974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:17:00.278 EAL: TSC is not safe to use in SMP mode 00:17:00.278 EAL: TSC is not invariant 00:17:00.278 [2024-07-15 09:43:28.098807] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.278 [2024-07-15 09:43:28.214726] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:17:00.278 [2024-07-15 09:43:28.217344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.278 09:43:28 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:00.278 09:43:28 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:17:00.278 09:43:28 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:17:00.537 { 00:17:00.537 "version": "SPDK v24.09-pre git sha1 62a72093c", 00:17:00.537 "fields": { 00:17:00.537 "major": 24, 00:17:00.537 "minor": 9, 00:17:00.537 "patch": 0, 00:17:00.537 "suffix": "-pre", 00:17:00.537 "commit": "62a72093c" 00:17:00.537 } 00:17:00.537 } 00:17:00.537 09:43:28 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:17:00.537 09:43:28 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:17:00.537 09:43:28 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:17:00.537 09:43:28 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:17:00.537 09:43:28 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:17:00.537 09:43:28 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:17:00.537 09:43:28 app_cmdline -- app/cmdline.sh@26 -- # sort 00:17:00.537 09:43:28 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.537 09:43:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:17:00.537 09:43:28 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.537 09:43:28 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:17:00.537 09:43:28 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:17:00.537 09:43:28 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:17:00.537 09:43:28 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:17:00.537 09:43:28 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:17:00.537 09:43:28 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:00.537 09:43:28 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:00.537 09:43:28 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:00.537 09:43:28 app_cmdline -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:17:00.537 09:43:28 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:00.537 09:43:28 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:00.537 09:43:28 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:00.537 09:43:28 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:00.537 09:43:28 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:17:00.797 request: 00:17:00.797 { 00:17:00.797 "method": "env_dpdk_get_mem_stats", 00:17:00.797 "req_id": 1 00:17:00.797 } 00:17:00.797 Got JSON-RPC error response 00:17:00.797 response: 00:17:00.797 { 00:17:00.797 "code": -32601, 00:17:00.797 "message": "Method not found" 00:17:00.797 } 00:17:00.797 09:43:28 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:17:00.797 09:43:28 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:00.797 09:43:28 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:00.797 09:43:28 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:00.797 09:43:28 app_cmdline -- app/cmdline.sh@1 -- # killprocess 47572 00:17:00.797 09:43:28 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 47572 ']' 00:17:00.797 09:43:28 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 47572 00:17:00.797 09:43:28 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:17:00.797 09:43:28 app_cmdline -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:17:00.797 09:43:28 app_cmdline -- common/autotest_common.sh@956 -- # ps -c -o command 47572 00:17:00.797 09:43:28 app_cmdline -- common/autotest_common.sh@956 -- # tail -1 00:17:00.797 09:43:28 app_cmdline -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:17:00.797 09:43:28 app_cmdline -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:17:00.797 killing process with pid 47572 00:17:00.797 09:43:28 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 47572' 00:17:00.797 09:43:28 app_cmdline -- common/autotest_common.sh@967 -- # kill 47572 00:17:00.797 09:43:28 app_cmdline -- common/autotest_common.sh@972 -- # wait 47572 00:17:01.364 00:17:01.364 real 0m2.027s 00:17:01.364 user 0m1.991s 00:17:01.364 sys 0m1.034s 00:17:01.364 09:43:29 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:01.364 09:43:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:17:01.364 ************************************ 00:17:01.364 END TEST app_cmdline 00:17:01.364 ************************************ 00:17:01.364 09:43:29 -- common/autotest_common.sh@1142 -- # return 0 00:17:01.364 09:43:29 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:17:01.364 09:43:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:01.364 09:43:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:01.364 09:43:29 -- common/autotest_common.sh@10 -- # set +x 00:17:01.364 ************************************ 00:17:01.364 START TEST version 00:17:01.364 ************************************ 00:17:01.364 09:43:29 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:17:01.364 * Looking for test storage... 
00:17:01.364 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:17:01.364 09:43:29 version -- app/version.sh@17 -- # get_header_version major 00:17:01.364 09:43:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:17:01.364 09:43:29 version -- app/version.sh@14 -- # cut -f2 00:17:01.364 09:43:29 version -- app/version.sh@14 -- # tr -d '"' 00:17:01.364 09:43:29 version -- app/version.sh@17 -- # major=24 00:17:01.364 09:43:29 version -- app/version.sh@18 -- # get_header_version minor 00:17:01.364 09:43:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:17:01.364 09:43:29 version -- app/version.sh@14 -- # cut -f2 00:17:01.364 09:43:29 version -- app/version.sh@14 -- # tr -d '"' 00:17:01.364 09:43:29 version -- app/version.sh@18 -- # minor=9 00:17:01.364 09:43:29 version -- app/version.sh@19 -- # get_header_version patch 00:17:01.364 09:43:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:17:01.364 09:43:29 version -- app/version.sh@14 -- # cut -f2 00:17:01.364 09:43:29 version -- app/version.sh@14 -- # tr -d '"' 00:17:01.364 09:43:29 version -- app/version.sh@19 -- # patch=0 00:17:01.364 09:43:29 version -- app/version.sh@20 -- # get_header_version suffix 00:17:01.364 09:43:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:17:01.364 09:43:29 version -- app/version.sh@14 -- # cut -f2 00:17:01.364 09:43:29 version -- app/version.sh@14 -- # tr -d '"' 00:17:01.364 09:43:29 version -- app/version.sh@20 -- # suffix=-pre 00:17:01.364 09:43:29 version -- app/version.sh@22 -- # version=24.9 00:17:01.364 09:43:29 version -- app/version.sh@25 -- # (( patch != 0 )) 00:17:01.364 09:43:29 version -- app/version.sh@28 -- # version=24.9rc0 00:17:01.364 09:43:29 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:17:01.364 09:43:29 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:17:01.623 09:43:29 version -- app/version.sh@30 -- # py_version=24.9rc0 00:17:01.623 09:43:29 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:17:01.623 00:17:01.623 real 0m0.229s 00:17:01.623 user 0m0.166s 00:17:01.623 sys 0m0.149s 00:17:01.623 09:43:29 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:01.623 09:43:29 version -- common/autotest_common.sh@10 -- # set +x 00:17:01.623 ************************************ 00:17:01.623 END TEST version 00:17:01.623 ************************************ 00:17:01.623 09:43:29 -- common/autotest_common.sh@1142 -- # return 0 00:17:01.623 09:43:29 -- spdk/autotest.sh@188 -- # '[' 1 -eq 1 ']' 00:17:01.623 09:43:29 -- spdk/autotest.sh@189 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:17:01.623 09:43:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:01.623 09:43:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:01.623 09:43:29 -- common/autotest_common.sh@10 -- # set +x 00:17:01.623 ************************************ 00:17:01.623 START TEST blockdev_general 00:17:01.623 
************************************ 00:17:01.623 09:43:29 blockdev_general -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:17:01.623 * Looking for test storage... 00:17:01.882 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:17:01.882 09:43:29 blockdev_general -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:17:01.882 09:43:29 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:17:01.882 09:43:29 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:17:01.882 09:43:29 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:01.882 09:43:29 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:17:01.882 09:43:29 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:17:01.882 09:43:29 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:17:01.882 09:43:29 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:17:01.882 09:43:29 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:17:01.882 09:43:29 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:17:01.882 09:43:29 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:17:01.882 09:43:29 blockdev_general -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:17:01.882 09:43:29 blockdev_general -- bdev/blockdev.sh@674 -- # uname -s 00:17:01.882 09:43:29 blockdev_general -- bdev/blockdev.sh@674 -- # '[' FreeBSD = Linux ']' 00:17:01.882 09:43:29 blockdev_general -- bdev/blockdev.sh@679 -- # PRE_RESERVED_MEM=2048 00:17:01.882 09:43:29 blockdev_general -- bdev/blockdev.sh@682 -- # test_type=bdev 00:17:01.882 09:43:29 blockdev_general -- bdev/blockdev.sh@683 -- # crypto_device= 00:17:01.882 09:43:29 blockdev_general -- bdev/blockdev.sh@684 -- # dek= 00:17:01.882 09:43:29 blockdev_general -- bdev/blockdev.sh@685 -- # env_ctx= 00:17:01.882 09:43:29 blockdev_general -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:17:01.882 09:43:29 blockdev_general -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:17:01.882 09:43:29 blockdev_general -- bdev/blockdev.sh@690 -- # [[ bdev == bdev ]] 00:17:01.882 09:43:29 blockdev_general -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:17:01.882 09:43:29 blockdev_general -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:17:01.882 09:43:29 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=47707 00:17:01.882 09:43:29 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:17:01.882 09:43:29 blockdev_general -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:17:01.882 09:43:29 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 47707 00:17:01.882 09:43:29 blockdev_general -- common/autotest_common.sh@829 -- # '[' -z 47707 ']' 00:17:01.882 09:43:29 blockdev_general -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.882 09:43:29 blockdev_general -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:01.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.882 09:43:29 blockdev_general -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:01.882 09:43:29 blockdev_general -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:01.882 09:43:29 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:17:01.882 [2024-07-15 09:43:29.745072] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:17:01.882 [2024-07-15 09:43:29.745400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:17:02.470 EAL: TSC is not safe to use in SMP mode 00:17:02.470 EAL: TSC is not invariant 00:17:02.470 [2024-07-15 09:43:30.481160] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.730 [2024-07-15 09:43:30.595633] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:17:02.730 [2024-07-15 09:43:30.598133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.730 09:43:30 blockdev_general -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:02.730 09:43:30 blockdev_general -- common/autotest_common.sh@862 -- # return 0 00:17:02.730 09:43:30 blockdev_general -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:17:02.730 09:43:30 blockdev_general -- bdev/blockdev.sh@696 -- # setup_bdev_conf 00:17:02.730 09:43:30 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:17:02.730 09:43:30 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.730 09:43:30 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:17:02.730 [2024-07-15 09:43:30.766288] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:17:02.730 [2024-07-15 09:43:30.766348] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:17:02.730 00:17:02.730 [2024-07-15 09:43:30.774297] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:17:02.730 [2024-07-15 09:43:30.774354] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:17:02.730 00:17:02.730 Malloc0 00:17:02.730 Malloc1 00:17:02.730 Malloc2 00:17:02.730 Malloc3 00:17:02.730 Malloc4 00:17:02.989 Malloc5 00:17:02.989 Malloc6 00:17:02.989 Malloc7 00:17:02.989 Malloc8 00:17:02.989 Malloc9 00:17:02.989 [2024-07-15 09:43:30.866284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:17:02.989 [2024-07-15 09:43:30.866347] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.989 [2024-07-15 09:43:30.866382] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e5a6f63a980 00:17:02.989 [2024-07-15 09:43:30.866389] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.989 [2024-07-15 09:43:30.866842] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.989 [2024-07-15 09:43:30.866875] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:17:02.989 TestPT 00:17:02.989 09:43:30 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.989 09:43:30 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:17:02.989 5000+0 records in 00:17:02.989 5000+0 records out 00:17:02.989 10240000 bytes transferred in 0.030244 secs (338578109 bytes/sec) 00:17:02.989 09:43:30 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 
2048 00:17:02.989 09:43:30 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.989 09:43:30 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:17:02.989 AIO0 00:17:02.989 09:43:30 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.989 09:43:30 blockdev_general -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:17:02.989 09:43:30 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.989 09:43:30 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:17:02.989 09:43:30 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.989 09:43:30 blockdev_general -- bdev/blockdev.sh@740 -- # cat 00:17:02.989 09:43:30 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:17:02.989 09:43:30 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.989 09:43:30 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:17:02.989 09:43:31 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.989 09:43:31 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:17:02.989 09:43:31 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.989 09:43:31 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:17:02.989 09:43:31 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.989 09:43:31 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:17:02.989 09:43:31 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.989 09:43:31 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:17:02.989 09:43:31 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.989 09:43:31 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:17:02.989 09:43:31 blockdev_general -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:17:02.989 09:43:31 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:17:02.989 09:43:31 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.989 09:43:31 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:17:03.250 09:43:31 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.250 09:43:31 blockdev_general -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:17:03.250 09:43:31 blockdev_general -- bdev/blockdev.sh@749 -- # jq -r .name 00:17:03.251 09:43:31 blockdev_general -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "b1d0ebbc-428e-11ef-a0af-c98d8ee52a94"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b1d0ebbc-428e-11ef-a0af-c98d8ee52a94",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' 
"dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "cbe6b8b7-8297-2a51-b4b1-d73ac4937a6d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "cbe6b8b7-8297-2a51-b4b1-d73ac4937a6d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "fb1f5c07-d043-9459-be99-a52a71fe4dc0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "fb1f5c07-d043-9459-be99-a52a71fe4dc0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "e205e718-1c8f-1d54-bca6-87914e099cf5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e205e718-1c8f-1d54-bca6-87914e099cf5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "bfb71bc6-b849-0052-b455-908517d9da7f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "bfb71bc6-b849-0052-b455-908517d9da7f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' 
"unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "d9bacc4a-455d-d85e-982c-7f95fa82813e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d9bacc4a-455d-d85e-982c-7f95fa82813e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "190b4820-b188-ae50-8359-9b87ac805d1b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "190b4820-b188-ae50-8359-9b87ac805d1b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "556fd9b4-5aaf-065f-980f-46f4147d70de"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "556fd9b4-5aaf-065f-980f-46f4147d70de",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "e14efb45-c5b0-175a-acd2-e2d8fc065ed6"' ' ],' ' "product_name": "Split 
Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e14efb45-c5b0-175a-acd2-e2d8fc065ed6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "1c5eb6cc-4c6c-6058-8440-b61c26ca048e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1c5eb6cc-4c6c-6058-8440-b61c26ca048e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "d0af103c-7b83-1055-8b03-9ce9647cd306"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d0af103c-7b83-1055-8b03-9ce9647cd306",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "412bba8a-abe1-8653-a0df-19f23c7d9b5e"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "412bba8a-abe1-8653-a0df-19f23c7d9b5e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": 
false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "b1df0594-428e-11ef-a0af-c98d8ee52a94"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "b1df0594-428e-11ef-a0af-c98d8ee52a94",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b1df0594-428e-11ef-a0af-c98d8ee52a94",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "b1d5cd53-428e-11ef-a0af-c98d8ee52a94",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "b1d7058f-428e-11ef-a0af-c98d8ee52a94",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "b1e02dd6-428e-11ef-a0af-c98d8ee52a94"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "b1e02dd6-428e-11ef-a0af-c98d8ee52a94",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b1e02dd6-428e-11ef-a0af-c98d8ee52a94",' ' "strip_size_kb": 64,' ' "state": "online",' ' 
"raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "b1d83dfc-428e-11ef-a0af-c98d8ee52a94",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "b1d97668-428e-11ef-a0af-c98d8ee52a94",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "b1e16605-428e-11ef-a0af-c98d8ee52a94"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b1e16605-428e-11ef-a0af-c98d8ee52a94",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b1e16605-428e-11ef-a0af-c98d8ee52a94",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "b1daaf25-428e-11ef-a0af-c98d8ee52a94",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "b1dc83d2-428e-11ef-a0af-c98d8ee52a94",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "b1ea8f01-428e-11ef-a0af-c98d8ee52a94"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "b1ea8f01-428e-11ef-a0af-c98d8ee52a94",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:17:03.251 09:43:31 blockdev_general -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:17:03.251 09:43:31 blockdev_general -- bdev/blockdev.sh@752 -- # 
hello_world_bdev=Malloc0 00:17:03.251 09:43:31 blockdev_general -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:17:03.251 09:43:31 blockdev_general -- bdev/blockdev.sh@754 -- # killprocess 47707 00:17:03.251 09:43:31 blockdev_general -- common/autotest_common.sh@948 -- # '[' -z 47707 ']' 00:17:03.251 09:43:31 blockdev_general -- common/autotest_common.sh@952 -- # kill -0 47707 00:17:03.251 09:43:31 blockdev_general -- common/autotest_common.sh@953 -- # uname 00:17:03.251 09:43:31 blockdev_general -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:17:03.251 09:43:31 blockdev_general -- common/autotest_common.sh@956 -- # ps -c -o command 47707 00:17:03.251 09:43:31 blockdev_general -- common/autotest_common.sh@956 -- # tail -1 00:17:03.251 09:43:31 blockdev_general -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:17:03.251 killing process with pid 47707 00:17:03.251 09:43:31 blockdev_general -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:17:03.251 09:43:31 blockdev_general -- common/autotest_common.sh@966 -- # echo 'killing process with pid 47707' 00:17:03.251 09:43:31 blockdev_general -- common/autotest_common.sh@967 -- # kill 47707 00:17:03.251 09:43:31 blockdev_general -- common/autotest_common.sh@972 -- # wait 47707 00:17:03.819 09:43:31 blockdev_general -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:03.819 09:43:31 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:17:03.819 09:43:31 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:17:03.819 09:43:31 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:03.819 09:43:31 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:17:03.819 ************************************ 00:17:03.819 START TEST bdev_hello_world 00:17:03.819 ************************************ 00:17:03.819 09:43:31 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:17:03.819 [2024-07-15 09:43:31.749980] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:17:03.819 [2024-07-15 09:43:31.750270] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:17:04.387 EAL: TSC is not safe to use in SMP mode 00:17:04.387 EAL: TSC is not invariant 00:17:04.387 [2024-07-15 09:43:32.442594] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.646 [2024-07-15 09:43:32.551620] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:17:04.646 [2024-07-15 09:43:32.554093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.646 [2024-07-15 09:43:32.615197] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:17:04.646 [2024-07-15 09:43:32.615238] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:17:04.646 [2024-07-15 09:43:32.623177] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:17:04.646 [2024-07-15 09:43:32.623197] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:17:04.646 [2024-07-15 09:43:32.631193] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:17:04.646 [2024-07-15 09:43:32.631214] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:17:04.646 [2024-07-15 09:43:32.631220] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:17:04.646 [2024-07-15 09:43:32.679195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:17:04.646 [2024-07-15 09:43:32.679233] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:04.646 [2024-07-15 09:43:32.679241] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d207cc36800 00:17:04.646 [2024-07-15 09:43:32.679248] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:04.646 [2024-07-15 09:43:32.679554] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:04.646 [2024-07-15 09:43:32.679572] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:17:04.905 [2024-07-15 09:43:32.779281] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:17:04.905 [2024-07-15 09:43:32.779322] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:17:04.905 [2024-07-15 09:43:32.779333] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:17:04.905 [2024-07-15 09:43:32.779344] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:17:04.905 [2024-07-15 09:43:32.779354] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:17:04.905 [2024-07-15 09:43:32.779361] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:17:04.905 [2024-07-15 09:43:32.779371] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
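The hello_bdev run above completes a full round trip through the bdev layer: the example opens Malloc0 (selected as hello_world_bdev), acquires an I/O channel, writes a buffer, reads it back, and logs "Hello World!" from the read completion before stopping the app. The same invocation can be repeated standalone against the JSON config used here (paths as in this run; the test harness also appends an empty positional argument):

  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0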
00:17:04.905 00:17:04.905 [2024-07-15 09:43:32.779378] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:17:05.214 00:17:05.214 real 0m1.375s 00:17:05.214 user 0m0.628s 00:17:05.214 sys 0m0.744s 00:17:05.214 09:43:33 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:05.215 09:43:33 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:17:05.215 ************************************ 00:17:05.215 END TEST bdev_hello_world 00:17:05.215 ************************************ 00:17:05.215 09:43:33 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:17:05.215 09:43:33 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:17:05.215 09:43:33 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:05.215 09:43:33 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:05.215 09:43:33 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:17:05.215 ************************************ 00:17:05.215 START TEST bdev_bounds 00:17:05.215 ************************************ 00:17:05.215 09:43:33 blockdev_general.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:17:05.215 09:43:33 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=47759 00:17:05.215 09:43:33 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:17:05.215 Process bdevio pid: 47759 00:17:05.215 09:43:33 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 2048 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:05.215 09:43:33 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 47759' 00:17:05.215 09:43:33 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 47759 00:17:05.215 09:43:33 blockdev_general.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 47759 ']' 00:17:05.215 09:43:33 blockdev_general.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.215 09:43:33 blockdev_general.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:05.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:05.215 09:43:33 blockdev_general.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.215 09:43:33 blockdev_general.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:05.215 09:43:33 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:05.215 [2024-07-15 09:43:33.185120] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:17:05.215 [2024-07-15 09:43:33.185443] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 2048 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:17:05.800 EAL: TSC is not safe to use in SMP mode 00:17:05.800 EAL: TSC is not invariant 00:17:06.060 [2024-07-15 09:43:33.893047] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:06.060 [2024-07-15 09:43:34.008437] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:17:06.060 [2024-07-15 09:43:34.008511] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:17:06.060 [2024-07-15 09:43:34.008520] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:17:06.060 [2024-07-15 09:43:34.012574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.060 [2024-07-15 09:43:34.012528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:06.060 [2024-07-15 09:43:34.012575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:06.060 [2024-07-15 09:43:34.074554] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:17:06.060 [2024-07-15 09:43:34.074626] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:17:06.060 [2024-07-15 09:43:34.082533] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:17:06.060 [2024-07-15 09:43:34.082562] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:17:06.060 [2024-07-15 09:43:34.090555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:17:06.060 [2024-07-15 09:43:34.090586] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:17:06.060 [2024-07-15 09:43:34.090593] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:17:06.060 [2024-07-15 09:43:34.138556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:17:06.060 [2024-07-15 09:43:34.138626] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.060 [2024-07-15 09:43:34.138635] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2cac8cc36800 00:17:06.060 [2024-07-15 09:43:34.138642] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.060 [2024-07-15 09:43:34.139106] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.060 [2024-07-15 09:43:34.139133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:17:06.319 09:43:34 blockdev_general.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:06.319 09:43:34 blockdev_general.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:17:06.319 09:43:34 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:17:06.319 I/O targets: 00:17:06.319 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:17:06.319 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:17:06.319 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:17:06.319 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:17:06.319 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:17:06.319 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:17:06.319 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:17:06.319 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:17:06.319 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:17:06.319 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:17:06.319 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:17:06.319 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:17:06.319 raid0: 131072 blocks of 512 bytes (64 MiB) 00:17:06.319 concat0: 131072 blocks of 512 bytes (64 MiB) 00:17:06.319 raid1: 65536 blocks of 512 bytes (32 MiB) 00:17:06.319 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:17:06.319 00:17:06.319 00:17:06.319 CUnit - A unit testing framework for C - Version 2.1-3 00:17:06.319 http://cunit.sourceforge.net/ 00:17:06.319 00:17:06.319 00:17:06.319 Suite: bdevio tests on: 
AIO0 00:17:06.319 Test: blockdev write read block ...passed 00:17:06.319 Test: blockdev write zeroes read block ...passed 00:17:06.320 Test: blockdev write zeroes read no split ...passed 00:17:06.320 Test: blockdev write zeroes read split ...passed 00:17:06.580 Test: blockdev write zeroes read split partial ...passed 00:17:06.580 Test: blockdev reset ...passed 00:17:06.580 Test: blockdev write read 8 blocks ...passed 00:17:06.580 Test: blockdev write read size > 128k ...passed 00:17:06.580 Test: blockdev write read invalid size ...passed 00:17:06.580 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:06.580 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:06.580 Test: blockdev write read max offset ...passed 00:17:06.580 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:06.580 Test: blockdev writev readv 8 blocks ...passed 00:17:06.580 Test: blockdev writev readv 30 x 1block ...passed 00:17:06.580 Test: blockdev writev readv block ...passed 00:17:06.580 Test: blockdev writev readv size > 128k ...passed 00:17:06.580 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:06.580 Test: blockdev comparev and writev ...passed 00:17:06.580 Test: blockdev nvme passthru rw ...passed 00:17:06.580 Test: blockdev nvme passthru vendor specific ...passed 00:17:06.580 Test: blockdev nvme admin passthru ...passed 00:17:06.580 Test: blockdev copy ...passed 00:17:06.580 Suite: bdevio tests on: raid1 00:17:06.580 Test: blockdev write read block ...passed 00:17:06.580 Test: blockdev write zeroes read block ...passed 00:17:06.580 Test: blockdev write zeroes read no split ...passed 00:17:06.580 Test: blockdev write zeroes read split ...passed 00:17:06.580 Test: blockdev write zeroes read split partial ...passed 00:17:06.580 Test: blockdev reset ...passed 00:17:06.580 Test: blockdev write read 8 blocks ...passed 00:17:06.580 Test: blockdev write read size > 128k ...passed 00:17:06.580 Test: blockdev write read invalid size ...passed 00:17:06.580 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:06.580 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:06.580 Test: blockdev write read max offset ...passed 00:17:06.580 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:06.580 Test: blockdev writev readv 8 blocks ...passed 00:17:06.580 Test: blockdev writev readv 30 x 1block ...passed 00:17:06.580 Test: blockdev writev readv block ...passed 00:17:06.580 Test: blockdev writev readv size > 128k ...passed 00:17:06.580 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:06.580 Test: blockdev comparev and writev ...passed 00:17:06.580 Test: blockdev nvme passthru rw ...passed 00:17:06.580 Test: blockdev nvme passthru vendor specific ...passed 00:17:06.580 Test: blockdev nvme admin passthru ...passed 00:17:06.580 Test: blockdev copy ...passed 00:17:06.580 Suite: bdevio tests on: concat0 00:17:06.580 Test: blockdev write read block ...passed 00:17:06.580 Test: blockdev write zeroes read block ...passed 00:17:06.580 Test: blockdev write zeroes read no split ...passed 00:17:06.580 Test: blockdev write zeroes read split ...passed 00:17:06.580 Test: blockdev write zeroes read split partial ...passed 00:17:06.580 Test: blockdev reset ...passed 00:17:06.580 Test: blockdev write read 8 blocks ...passed 00:17:06.580 Test: blockdev write read size > 128k ...passed 00:17:06.580 Test: blockdev write read invalid size ...passed 00:17:06.580 
Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:06.580 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:06.580 Test: blockdev write read max offset ...passed 00:17:06.580 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:06.580 Test: blockdev writev readv 8 blocks ...passed 00:17:06.580 Test: blockdev writev readv 30 x 1block ...passed 00:17:06.581 Test: blockdev writev readv block ...passed 00:17:06.581 Test: blockdev writev readv size > 128k ...passed 00:17:06.581 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:06.581 Test: blockdev comparev and writev ...passed 00:17:06.581 Test: blockdev nvme passthru rw ...passed 00:17:06.581 Test: blockdev nvme passthru vendor specific ...passed 00:17:06.581 Test: blockdev nvme admin passthru ...passed 00:17:06.581 Test: blockdev copy ...passed 00:17:06.581 Suite: bdevio tests on: raid0 00:17:06.581 Test: blockdev write read block ...passed 00:17:06.581 Test: blockdev write zeroes read block ...passed 00:17:06.581 Test: blockdev write zeroes read no split ...passed 00:17:06.581 Test: blockdev write zeroes read split ...passed 00:17:06.581 Test: blockdev write zeroes read split partial ...passed 00:17:06.581 Test: blockdev reset ...passed 00:17:06.581 Test: blockdev write read 8 blocks ...passed 00:17:06.581 Test: blockdev write read size > 128k ...passed 00:17:06.581 Test: blockdev write read invalid size ...passed 00:17:06.581 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:06.581 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:06.581 Test: blockdev write read max offset ...passed 00:17:06.581 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:06.581 Test: blockdev writev readv 8 blocks ...passed 00:17:06.581 Test: blockdev writev readv 30 x 1block ...passed 00:17:06.581 Test: blockdev writev readv block ...passed 00:17:06.581 Test: blockdev writev readv size > 128k ...passed 00:17:06.581 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:06.581 Test: blockdev comparev and writev ...passed 00:17:06.581 Test: blockdev nvme passthru rw ...passed 00:17:06.581 Test: blockdev nvme passthru vendor specific ...passed 00:17:06.581 Test: blockdev nvme admin passthru ...passed 00:17:06.581 Test: blockdev copy ...passed 00:17:06.581 Suite: bdevio tests on: TestPT 00:17:06.581 Test: blockdev write read block ...passed 00:17:06.581 Test: blockdev write zeroes read block ...passed 00:17:06.581 Test: blockdev write zeroes read no split ...passed 00:17:06.581 Test: blockdev write zeroes read split ...passed 00:17:06.581 Test: blockdev write zeroes read split partial ...passed 00:17:06.581 Test: blockdev reset ...passed 00:17:06.581 Test: blockdev write read 8 blocks ...passed 00:17:06.581 Test: blockdev write read size > 128k ...passed 00:17:06.581 Test: blockdev write read invalid size ...passed 00:17:06.581 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:06.581 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:06.581 Test: blockdev write read max offset ...passed 00:17:06.581 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:06.581 Test: blockdev writev readv 8 blocks ...passed 00:17:06.581 Test: blockdev writev readv 30 x 1block ...passed 00:17:06.581 Test: blockdev writev readv block ...passed 00:17:06.581 Test: blockdev writev readv size > 128k ...passed 
00:17:06.581 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:06.581 Test: blockdev comparev and writev ...passed 00:17:06.581 Test: blockdev nvme passthru rw ...passed 00:17:06.581 Test: blockdev nvme passthru vendor specific ...passed 00:17:06.581 Test: blockdev nvme admin passthru ...passed 00:17:06.581 Test: blockdev copy ...passed 00:17:06.581 Suite: bdevio tests on: Malloc2p7 00:17:06.581 Test: blockdev write read block ...passed 00:17:06.581 Test: blockdev write zeroes read block ...passed 00:17:06.581 Test: blockdev write zeroes read no split ...passed 00:17:06.581 Test: blockdev write zeroes read split ...passed 00:17:06.581 Test: blockdev write zeroes read split partial ...passed 00:17:06.581 Test: blockdev reset ...passed 00:17:06.581 Test: blockdev write read 8 blocks ...passed 00:17:06.581 Test: blockdev write read size > 128k ...passed 00:17:06.581 Test: blockdev write read invalid size ...passed 00:17:06.581 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:06.581 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:06.581 Test: blockdev write read max offset ...passed 00:17:06.581 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:06.581 Test: blockdev writev readv 8 blocks ...passed 00:17:06.581 Test: blockdev writev readv 30 x 1block ...passed 00:17:06.581 Test: blockdev writev readv block ...passed 00:17:06.581 Test: blockdev writev readv size > 128k ...passed 00:17:06.581 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:06.581 Test: blockdev comparev and writev ...passed 00:17:06.581 Test: blockdev nvme passthru rw ...passed 00:17:06.581 Test: blockdev nvme passthru vendor specific ...passed 00:17:06.581 Test: blockdev nvme admin passthru ...passed 00:17:06.581 Test: blockdev copy ...passed 00:17:06.581 Suite: bdevio tests on: Malloc2p6 00:17:06.581 Test: blockdev write read block ...passed 00:17:06.581 Test: blockdev write zeroes read block ...passed 00:17:06.581 Test: blockdev write zeroes read no split ...passed 00:17:06.581 Test: blockdev write zeroes read split ...passed 00:17:06.581 Test: blockdev write zeroes read split partial ...passed 00:17:06.581 Test: blockdev reset ...passed 00:17:06.581 Test: blockdev write read 8 blocks ...passed 00:17:06.581 Test: blockdev write read size > 128k ...passed 00:17:06.581 Test: blockdev write read invalid size ...passed 00:17:06.581 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:06.581 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:06.581 Test: blockdev write read max offset ...passed 00:17:06.581 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:06.581 Test: blockdev writev readv 8 blocks ...passed 00:17:06.581 Test: blockdev writev readv 30 x 1block ...passed 00:17:06.581 Test: blockdev writev readv block ...passed 00:17:06.581 Test: blockdev writev readv size > 128k ...passed 00:17:06.581 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:06.581 Test: blockdev comparev and writev ...passed 00:17:06.581 Test: blockdev nvme passthru rw ...passed 00:17:06.581 Test: blockdev nvme passthru vendor specific ...passed 00:17:06.581 Test: blockdev nvme admin passthru ...passed 00:17:06.581 Test: blockdev copy ...passed 00:17:06.581 Suite: bdevio tests on: Malloc2p5 00:17:06.581 Test: blockdev write read block ...passed 00:17:06.581 Test: blockdev write zeroes read block ...passed 00:17:06.581 Test: blockdev 
write zeroes read no split ...passed 00:17:06.581 Test: blockdev write zeroes read split ...passed 00:17:06.581 Test: blockdev write zeroes read split partial ...passed 00:17:06.581 Test: blockdev reset ...passed 00:17:06.581 Test: blockdev write read 8 blocks ...passed 00:17:06.581 Test: blockdev write read size > 128k ...passed 00:17:06.581 Test: blockdev write read invalid size ...passed 00:17:06.581 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:06.581 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:06.581 Test: blockdev write read max offset ...passed 00:17:06.581 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:06.581 Test: blockdev writev readv 8 blocks ...passed 00:17:06.581 Test: blockdev writev readv 30 x 1block ...passed 00:17:06.581 Test: blockdev writev readv block ...passed 00:17:06.581 Test: blockdev writev readv size > 128k ...passed 00:17:06.581 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:06.581 Test: blockdev comparev and writev ...passed 00:17:06.581 Test: blockdev nvme passthru rw ...passed 00:17:06.581 Test: blockdev nvme passthru vendor specific ...passed 00:17:06.581 Test: blockdev nvme admin passthru ...passed 00:17:06.581 Test: blockdev copy ...passed 00:17:06.581 Suite: bdevio tests on: Malloc2p4 00:17:06.581 Test: blockdev write read block ...passed 00:17:06.581 Test: blockdev write zeroes read block ...passed 00:17:06.581 Test: blockdev write zeroes read no split ...passed 00:17:06.581 Test: blockdev write zeroes read split ...passed 00:17:06.581 Test: blockdev write zeroes read split partial ...passed 00:17:06.581 Test: blockdev reset ...passed 00:17:06.581 Test: blockdev write read 8 blocks ...passed 00:17:06.581 Test: blockdev write read size > 128k ...passed 00:17:06.581 Test: blockdev write read invalid size ...passed 00:17:06.581 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:06.581 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:06.581 Test: blockdev write read max offset ...passed 00:17:06.581 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:06.581 Test: blockdev writev readv 8 blocks ...passed 00:17:06.581 Test: blockdev writev readv 30 x 1block ...passed 00:17:06.581 Test: blockdev writev readv block ...passed 00:17:06.581 Test: blockdev writev readv size > 128k ...passed 00:17:06.581 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:06.581 Test: blockdev comparev and writev ...passed 00:17:06.581 Test: blockdev nvme passthru rw ...passed 00:17:06.581 Test: blockdev nvme passthru vendor specific ...passed 00:17:06.581 Test: blockdev nvme admin passthru ...passed 00:17:06.581 Test: blockdev copy ...passed 00:17:06.581 Suite: bdevio tests on: Malloc2p3 00:17:06.581 Test: blockdev write read block ...passed 00:17:06.581 Test: blockdev write zeroes read block ...passed 00:17:06.581 Test: blockdev write zeroes read no split ...passed 00:17:06.581 Test: blockdev write zeroes read split ...passed 00:17:06.581 Test: blockdev write zeroes read split partial ...passed 00:17:06.581 Test: blockdev reset ...passed 00:17:06.581 Test: blockdev write read 8 blocks ...passed 00:17:06.581 Test: blockdev write read size > 128k ...passed 00:17:06.581 Test: blockdev write read invalid size ...passed 00:17:06.581 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:06.581 Test: blockdev write read offset + nbytes > size of 
blockdev ...passed 00:17:06.581 Test: blockdev write read max offset ...passed 00:17:06.581 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:06.581 Test: blockdev writev readv 8 blocks ...passed 00:17:06.581 Test: blockdev writev readv 30 x 1block ...passed 00:17:06.581 Test: blockdev writev readv block ...passed 00:17:06.581 Test: blockdev writev readv size > 128k ...passed 00:17:06.581 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:06.581 Test: blockdev comparev and writev ...passed 00:17:06.581 Test: blockdev nvme passthru rw ...passed 00:17:06.581 Test: blockdev nvme passthru vendor specific ...passed 00:17:06.581 Test: blockdev nvme admin passthru ...passed 00:17:06.581 Test: blockdev copy ...passed 00:17:06.581 Suite: bdevio tests on: Malloc2p2 00:17:06.581 Test: blockdev write read block ...passed 00:17:06.581 Test: blockdev write zeroes read block ...passed 00:17:06.581 Test: blockdev write zeroes read no split ...passed 00:17:06.581 Test: blockdev write zeroes read split ...passed 00:17:06.581 Test: blockdev write zeroes read split partial ...passed 00:17:06.581 Test: blockdev reset ...passed 00:17:06.581 Test: blockdev write read 8 blocks ...passed 00:17:06.581 Test: blockdev write read size > 128k ...passed 00:17:06.581 Test: blockdev write read invalid size ...passed 00:17:06.581 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:06.581 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:06.581 Test: blockdev write read max offset ...passed 00:17:06.582 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:06.582 Test: blockdev writev readv 8 blocks ...passed 00:17:06.582 Test: blockdev writev readv 30 x 1block ...passed 00:17:06.582 Test: blockdev writev readv block ...passed 00:17:06.582 Test: blockdev writev readv size > 128k ...passed 00:17:06.582 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:06.582 Test: blockdev comparev and writev ...passed 00:17:06.582 Test: blockdev nvme passthru rw ...passed 00:17:06.582 Test: blockdev nvme passthru vendor specific ...passed 00:17:06.582 Test: blockdev nvme admin passthru ...passed 00:17:06.582 Test: blockdev copy ...passed 00:17:06.582 Suite: bdevio tests on: Malloc2p1 00:17:06.582 Test: blockdev write read block ...passed 00:17:06.582 Test: blockdev write zeroes read block ...passed 00:17:06.582 Test: blockdev write zeroes read no split ...passed 00:17:06.582 Test: blockdev write zeroes read split ...passed 00:17:06.582 Test: blockdev write zeroes read split partial ...passed 00:17:06.582 Test: blockdev reset ...passed 00:17:06.582 Test: blockdev write read 8 blocks ...passed 00:17:06.582 Test: blockdev write read size > 128k ...passed 00:17:06.582 Test: blockdev write read invalid size ...passed 00:17:06.582 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:06.582 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:06.582 Test: blockdev write read max offset ...passed 00:17:06.582 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:06.582 Test: blockdev writev readv 8 blocks ...passed 00:17:06.582 Test: blockdev writev readv 30 x 1block ...passed 00:17:06.582 Test: blockdev writev readv block ...passed 00:17:06.582 Test: blockdev writev readv size > 128k ...passed 00:17:06.582 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:06.582 Test: blockdev comparev and writev ...passed 
00:17:06.582 Test: blockdev nvme passthru rw ...passed 00:17:06.582 Test: blockdev nvme passthru vendor specific ...passed 00:17:06.582 Test: blockdev nvme admin passthru ...passed 00:17:06.582 Test: blockdev copy ...passed 00:17:06.582 Suite: bdevio tests on: Malloc2p0 00:17:06.582 Test: blockdev write read block ...passed 00:17:06.582 Test: blockdev write zeroes read block ...passed 00:17:06.582 Test: blockdev write zeroes read no split ...passed 00:17:06.582 Test: blockdev write zeroes read split ...passed 00:17:06.582 Test: blockdev write zeroes read split partial ...passed 00:17:06.582 Test: blockdev reset ...passed 00:17:06.582 Test: blockdev write read 8 blocks ...passed 00:17:06.582 Test: blockdev write read size > 128k ...passed 00:17:06.582 Test: blockdev write read invalid size ...passed 00:17:06.582 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:06.582 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:06.582 Test: blockdev write read max offset ...passed 00:17:06.582 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:06.582 Test: blockdev writev readv 8 blocks ...passed 00:17:06.582 Test: blockdev writev readv 30 x 1block ...passed 00:17:06.582 Test: blockdev writev readv block ...passed 00:17:06.582 Test: blockdev writev readv size > 128k ...passed 00:17:06.582 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:06.582 Test: blockdev comparev and writev ...passed 00:17:06.582 Test: blockdev nvme passthru rw ...passed 00:17:06.582 Test: blockdev nvme passthru vendor specific ...passed 00:17:06.582 Test: blockdev nvme admin passthru ...passed 00:17:06.582 Test: blockdev copy ...passed 00:17:06.582 Suite: bdevio tests on: Malloc1p1 00:17:06.582 Test: blockdev write read block ...passed 00:17:06.582 Test: blockdev write zeroes read block ...passed 00:17:06.582 Test: blockdev write zeroes read no split ...passed 00:17:06.582 Test: blockdev write zeroes read split ...passed 00:17:06.582 Test: blockdev write zeroes read split partial ...passed 00:17:06.582 Test: blockdev reset ...passed 00:17:06.582 Test: blockdev write read 8 blocks ...passed 00:17:06.582 Test: blockdev write read size > 128k ...passed 00:17:06.582 Test: blockdev write read invalid size ...passed 00:17:06.582 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:06.582 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:06.582 Test: blockdev write read max offset ...passed 00:17:06.582 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:06.582 Test: blockdev writev readv 8 blocks ...passed 00:17:06.582 Test: blockdev writev readv 30 x 1block ...passed 00:17:06.582 Test: blockdev writev readv block ...passed 00:17:06.582 Test: blockdev writev readv size > 128k ...passed 00:17:06.582 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:06.582 Test: blockdev comparev and writev ...passed 00:17:06.582 Test: blockdev nvme passthru rw ...passed 00:17:06.582 Test: blockdev nvme passthru vendor specific ...passed 00:17:06.582 Test: blockdev nvme admin passthru ...passed 00:17:06.582 Test: blockdev copy ...passed 00:17:06.582 Suite: bdevio tests on: Malloc1p0 00:17:06.582 Test: blockdev write read block ...passed 00:17:06.582 Test: blockdev write zeroes read block ...passed 00:17:06.582 Test: blockdev write zeroes read no split ...passed 00:17:06.582 Test: blockdev write zeroes read split ...passed 00:17:06.582 Test: blockdev write 
zeroes read split partial ...passed 00:17:06.582 Test: blockdev reset ...passed 00:17:06.582 Test: blockdev write read 8 blocks ...passed 00:17:06.582 Test: blockdev write read size > 128k ...passed 00:17:06.582 Test: blockdev write read invalid size ...passed 00:17:06.582 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:06.582 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:06.582 Test: blockdev write read max offset ...passed 00:17:06.582 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:06.582 Test: blockdev writev readv 8 blocks ...passed 00:17:06.582 Test: blockdev writev readv 30 x 1block ...passed 00:17:06.582 Test: blockdev writev readv block ...passed 00:17:06.582 Test: blockdev writev readv size > 128k ...passed 00:17:06.582 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:06.582 Test: blockdev comparev and writev ...passed 00:17:06.582 Test: blockdev nvme passthru rw ...passed 00:17:06.582 Test: blockdev nvme passthru vendor specific ...passed 00:17:06.582 Test: blockdev nvme admin passthru ...passed 00:17:06.582 Test: blockdev copy ...passed 00:17:06.582 Suite: bdevio tests on: Malloc0 00:17:06.582 Test: blockdev write read block ...passed 00:17:06.582 Test: blockdev write zeroes read block ...passed 00:17:06.582 Test: blockdev write zeroes read no split ...passed 00:17:06.582 Test: blockdev write zeroes read split ...passed 00:17:06.582 Test: blockdev write zeroes read split partial ...passed 00:17:06.582 Test: blockdev reset ...passed 00:17:06.582 Test: blockdev write read 8 blocks ...passed 00:17:06.582 Test: blockdev write read size > 128k ...passed 00:17:06.582 Test: blockdev write read invalid size ...passed 00:17:06.582 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:06.582 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:06.582 Test: blockdev write read max offset ...passed 00:17:06.582 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:06.582 Test: blockdev writev readv 8 blocks ...passed 00:17:06.582 Test: blockdev writev readv 30 x 1block ...passed 00:17:06.582 Test: blockdev writev readv block ...passed 00:17:06.582 Test: blockdev writev readv size > 128k ...passed 00:17:06.582 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:06.582 Test: blockdev comparev and writev ...passed 00:17:06.582 Test: blockdev nvme passthru rw ...passed 00:17:06.582 Test: blockdev nvme passthru vendor specific ...passed 00:17:06.582 Test: blockdev nvme admin passthru ...passed 00:17:06.582 Test: blockdev copy ...passed 00:17:06.582 00:17:06.582 Run Summary: Type Total Ran Passed Failed Inactive 00:17:06.582 suites 16 16 n/a 0 0 00:17:06.582 tests 368 368 368 0 0 00:17:06.582 asserts 2224 2224 2224 0 n/a 00:17:06.582 00:17:06.582 Elapsed time = 0.594 seconds 00:17:06.582 0 00:17:06.582 09:43:34 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 47759 00:17:06.582 09:43:34 blockdev_general.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 47759 ']' 00:17:06.582 09:43:34 blockdev_general.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 47759 00:17:06.582 09:43:34 blockdev_general.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:17:06.842 09:43:34 blockdev_general.bdev_bounds -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:17:06.842 09:43:34 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # tail -1 
00:17:06.842 09:43:34 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # ps -c -o command 47759 00:17:06.842 09:43:34 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=bdevio 00:17:06.842 09:43:34 blockdev_general.bdev_bounds -- common/autotest_common.sh@958 -- # '[' bdevio = sudo ']' 00:17:06.842 killing process with pid 47759 00:17:06.842 09:43:34 blockdev_general.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 47759' 00:17:06.842 09:43:34 blockdev_general.bdev_bounds -- common/autotest_common.sh@967 -- # kill 47759 00:17:06.842 09:43:34 blockdev_general.bdev_bounds -- common/autotest_common.sh@972 -- # wait 47759 00:17:07.101 09:43:35 blockdev_general.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:17:07.101 00:17:07.101 real 0m1.850s 00:17:07.101 user 0m3.007s 00:17:07.101 sys 0m0.918s 00:17:07.101 09:43:35 blockdev_general.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:07.101 09:43:35 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:07.101 ************************************ 00:17:07.101 END TEST bdev_bounds 00:17:07.101 ************************************ 00:17:07.101 09:43:35 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:17:07.101 09:43:35 blockdev_general -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:17:07.101 09:43:35 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:07.101 09:43:35 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:07.101 09:43:35 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:17:07.101 ************************************ 00:17:07.101 START TEST bdev_nbd 00:17:07.101 ************************************ 00:17:07.101 09:43:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:17:07.101 09:43:35 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:17:07.101 09:43:35 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ FreeBSD == Linux ]] 00:17:07.101 09:43:35 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # return 0 00:17:07.101 00:17:07.101 real 0m0.006s 00:17:07.101 user 0m0.001s 00:17:07.101 sys 0m0.008s 00:17:07.101 09:43:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:07.101 ************************************ 00:17:07.101 END TEST bdev_nbd 00:17:07.101 ************************************ 00:17:07.101 09:43:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:07.101 09:43:35 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:17:07.101 09:43:35 blockdev_general -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:17:07.101 09:43:35 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = nvme ']' 00:17:07.101 09:43:35 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = gpt ']' 00:17:07.101 09:43:35 blockdev_general -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:17:07.101 09:43:35 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 
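[Editor's note] The bdev_nbd test above finishes in effectively zero time (real 0m0.006s): NBD is Linux-only, so on this FreeBSD host the test short-circuits at the uname guard visible in the blockdev.sh@300 trace. A condensed sketch of that guard (illustrative names, not the verbatim script):

    nbd_function_test() {
        # NBD kernel support only exists on Linux; skip cleanly elsewhere,
        # which is why the FreeBSD run returns 0 immediately.
        if [[ $(uname -s) != Linux ]]; then
            return 0
        fi
        # ... actual nbd setup and I/O checks would follow here ...
    }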
00:17:07.101 09:43:35 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:07.101 09:43:35 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:17:07.101 ************************************ 00:17:07.101 START TEST bdev_fio 00:17:07.101 ************************************ 00:17:07.101 09:43:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:17:07.101 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:17:07.101 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:17:07.101 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:17:07.101 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:17:07.101 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:17:07.101 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:17:07.101 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:17:07.101 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:17:07.101 09:43:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:07.101 09:43:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:17:07.101 09:43:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:17:07.101 09:43:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:17:07.101 09:43:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:17:07.101 09:43:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:07.101 09:43:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:17:07.101 09:43:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:17:07.101 09:43:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:07.101 09:43:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:17:07.101 09:43:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:17:07.101 09:43:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:17:07.101 09:43:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:17:07.101 09:43:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc0]' 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc0 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:08.039 09:43:35 
blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p0]' 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p0 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p1]' 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p1 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p0]' 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p0 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p1]' 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p1 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p2]' 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p2 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p3]' 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p3 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p4]' 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p4 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p5]' 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p5 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p6]' 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p6 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p7]' 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p7 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_TestPT]' 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=TestPT 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid0]' 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid0 00:17:08.039 09:43:35 
blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_concat0]' 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=concat0 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid1]' 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid1 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_AIO0]' 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=AIO0 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:08.039 09:43:35 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:08.039 ************************************ 00:17:08.039 START TEST bdev_fio_rw_verify 00:17:08.039 ************************************ 00:17:08.039 09:43:35 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:08.039 09:43:35 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:08.039 09:43:35 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:08.039 09:43:35 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:08.039 09:43:35 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:08.039 09:43:35 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:08.039 09:43:35 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:17:08.039 09:43:35 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # 
local asan_lib= 00:17:08.039 09:43:35 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:08.039 09:43:35 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:08.039 09:43:35 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:17:08.039 09:43:35 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:08.039 09:43:35 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:17:08.039 09:43:35 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:17:08.039 09:43:35 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:08.039 09:43:35 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:08.039 09:43:35 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:17:08.039 09:43:35 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:08.039 09:43:35 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:17:08.039 09:43:35 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:17:08.039 09:43:35 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:08.039 09:43:35 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:08.039 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:08.039 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:08.039 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:08.039 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:08.039 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:08.039 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:08.039 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:08.039 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:08.039 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:08.039 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:08.039 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:08.039 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:08.039 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:08.039 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:08.039 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:08.039 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:08.039 fio-3.35 00:17:08.039 Starting 16 threads 00:17:08.973 EAL: TSC is not safe to use in SMP mode 00:17:08.973 EAL: TSC is not invariant 00:17:21.379 00:17:21.379 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=101355: Mon Jul 15 09:43:47 2024 00:17:21.379 read: IOPS=275k, BW=1073MiB/s (1125MB/s)(10.5GiB/10005msec) 00:17:21.379 slat (nsec): min=218, max=2044.9M, avg=3968.30, stdev=1282200.73 00:17:21.379 clat (nsec): min=618, max=2046.7M, avg=48260.82, stdev=3512398.29 00:17:21.379 lat (nsec): min=1624, max=2046.7M, avg=52229.12, stdev=3739780.55 00:17:21.379 clat percentiles (usec): 00:17:21.379 | 50.000th=[ 8], 99.000th=[ 775], 99.900th=[ 857], 00:17:21.379 | 99.990th=[ 88605], 99.999th=[162530] 00:17:21.379 write: IOPS=459k, BW=1795MiB/s (1882MB/s)(17.4GiB/9914msec); 0 zone resets 00:17:21.379 slat (nsec): min=448, max=952793k, avg=18277.68, stdev=953295.65 00:17:21.379 clat (nsec): min=564, max=952870k, avg=89347.32, stdev=2033434.59 00:17:21.379 lat (usec): min=9, max=952879, avg=107.62, stdev=2246.35 00:17:21.379 clat percentiles (usec): 00:17:21.379 | 50.000th=[ 42], 99.000th=[ 750], 99.900th=[ 1991], 00:17:21.379 | 99.990th=[ 94897], 99.999th=[158335] 00:17:21.379 bw ( MiB/s): min= 674, max= 2865, per=99.63%, avg=1788.17, stdev=45.97, samples=293 00:17:21.379 iops : min=172720, max=733624, avg=457771.32, stdev=11767.88, samples=293 00:17:21.379 lat (nsec) : 750=0.01%, 1000=0.01% 00:17:21.379 lat (usec) : 2=0.12%, 4=14.47%, 10=18.25%, 20=18.70%, 50=22.65% 00:17:21.379 lat (usec) : 100=23.64%, 250=0.67%, 500=0.07%, 750=0.22%, 1000=1.09% 00:17:21.379 lat (msec) : 2=0.03%, 4=0.02%, 10=0.01%, 20=0.01%, 50=0.01% 00:17:21.379 lat (msec) : 100=0.02%, 250=0.01%, 500=0.01%, 1000=0.01%, >=2000=0.01% 00:17:21.379 cpu : usr=55.99%, sys=3.23%, ctx=904045, majf=0, minf=646 00:17:21.379 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:21.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:21.379 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:21.379 issued rwts: total=2748097,4555176,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:21.379 latency : target=0, window=0, percentile=100.00%, depth=8 00:17:21.379 00:17:21.379 Run status group 0 (all jobs): 00:17:21.379 READ: bw=1073MiB/s (1125MB/s), 1073MiB/s-1073MiB/s (1125MB/s-1125MB/s), io=10.5GiB (11.3GB), run=10005-10005msec 00:17:21.379 WRITE: bw=1795MiB/s (1882MB/s), 1795MiB/s-1795MiB/s (1882MB/s-1882MB/s), io=17.4GiB (18.7GB), run=9914-9914msec 00:17:21.379 00:17:21.379 real 0m12.444s 00:17:21.379 user 1m33.969s 00:17:21.379 sys 0m7.781s 00:17:21.379 09:43:48 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:21.379 09:43:48 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 
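[Editor's note] The rw_verify pass drives all 16 jobs through fio's external spdk_bdev ioengine rather than the kernel block layer. A minimal sketch of the invocation pattern used above, assuming fio is installed at /usr/src/fio and the SPDK fio plugin has been built (paths mirror this job's workspace):

    # LD_PRELOAD pulls in the spdk_bdev ioengine; each [job_*] section in
    # bdev.fio pairs with a bdev name via filename=, and --spdk_json_conf
    # tells the plugin which bdev configuration to load.
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        --verify_state_save=0 --spdk_mem=2048 \
        /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio

Each job then runs randwrite with 4 KiB blocks at queue depth 8 against its bdev, which is where the ~1795 MiB/s aggregate write bandwidth in the run summary comes from.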
00:17:21.379 ************************************ 00:17:21.379 END TEST bdev_fio_rw_verify 00:17:21.379 ************************************ 00:17:21.379 09:43:48 blockdev_general.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:17:21.379 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:17:21.379 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:21.379 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:17:21.379 09:43:48 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:21.379 09:43:48 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:17:21.379 09:43:48 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:17:21.379 09:43:48 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:17:21.379 09:43:48 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:17:21.379 09:43:48 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:21.379 09:43:48 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:17:21.379 09:43:48 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:17:21.380 09:43:48 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:21.380 09:43:48 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:17:21.380 09:43:48 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:17:21.380 09:43:48 blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:17:21.380 09:43:48 blockdev_general.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:17:21.380 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:17:21.381 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "b1d0ebbc-428e-11ef-a0af-c98d8ee52a94"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b1d0ebbc-428e-11ef-a0af-c98d8ee52a94",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "cbe6b8b7-8297-2a51-b4b1-d73ac4937a6d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' 
"num_blocks": 32768,' ' "uuid": "cbe6b8b7-8297-2a51-b4b1-d73ac4937a6d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "fb1f5c07-d043-9459-be99-a52a71fe4dc0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "fb1f5c07-d043-9459-be99-a52a71fe4dc0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "e205e718-1c8f-1d54-bca6-87914e099cf5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e205e718-1c8f-1d54-bca6-87914e099cf5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "bfb71bc6-b849-0052-b455-908517d9da7f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "bfb71bc6-b849-0052-b455-908517d9da7f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' 
"seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "d9bacc4a-455d-d85e-982c-7f95fa82813e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d9bacc4a-455d-d85e-982c-7f95fa82813e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "190b4820-b188-ae50-8359-9b87ac805d1b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "190b4820-b188-ae50-8359-9b87ac805d1b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "556fd9b4-5aaf-065f-980f-46f4147d70de"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "556fd9b4-5aaf-065f-980f-46f4147d70de",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "e14efb45-c5b0-175a-acd2-e2d8fc065ed6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e14efb45-c5b0-175a-acd2-e2d8fc065ed6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": 
true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "1c5eb6cc-4c6c-6058-8440-b61c26ca048e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1c5eb6cc-4c6c-6058-8440-b61c26ca048e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "d0af103c-7b83-1055-8b03-9ce9647cd306"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d0af103c-7b83-1055-8b03-9ce9647cd306",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "412bba8a-abe1-8653-a0df-19f23c7d9b5e"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "412bba8a-abe1-8653-a0df-19f23c7d9b5e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' 
"passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "b1df0594-428e-11ef-a0af-c98d8ee52a94"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "b1df0594-428e-11ef-a0af-c98d8ee52a94",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b1df0594-428e-11ef-a0af-c98d8ee52a94",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "b1d5cd53-428e-11ef-a0af-c98d8ee52a94",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "b1d7058f-428e-11ef-a0af-c98d8ee52a94",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "b1e02dd6-428e-11ef-a0af-c98d8ee52a94"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "b1e02dd6-428e-11ef-a0af-c98d8ee52a94",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b1e02dd6-428e-11ef-a0af-c98d8ee52a94",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "b1d83dfc-428e-11ef-a0af-c98d8ee52a94",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' 
"name": "Malloc7",' ' "uuid": "b1d97668-428e-11ef-a0af-c98d8ee52a94",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "b1e16605-428e-11ef-a0af-c98d8ee52a94"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b1e16605-428e-11ef-a0af-c98d8ee52a94",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b1e16605-428e-11ef-a0af-c98d8ee52a94",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "b1daaf25-428e-11ef-a0af-c98d8ee52a94",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "b1dc83d2-428e-11ef-a0af-c98d8ee52a94",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "b1ea8f01-428e-11ef-a0af-c98d8ee52a94"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "b1ea8f01-428e-11ef-a0af-c98d8ee52a94",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:17:21.381 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n Malloc0 00:17:21.381 Malloc1p0 00:17:21.381 Malloc1p1 00:17:21.381 Malloc2p0 00:17:21.381 Malloc2p1 00:17:21.381 Malloc2p2 00:17:21.381 Malloc2p3 00:17:21.381 Malloc2p4 00:17:21.381 Malloc2p5 00:17:21.381 Malloc2p6 00:17:21.381 Malloc2p7 00:17:21.381 TestPT 00:17:21.381 raid0 00:17:21.381 concat0 ]] 00:17:21.381 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | 
.name' 00:17:21.382 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' [printf arguments elided: the identical per-bdev JSON listing already shown in full above at bdev/blockdev.sh@355; duplicate xtrace output omitted]
00:17:21.382 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:17:21.382 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc0]' 00:17:21.382 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc0
00:17:21.382 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:17:21.382 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p0]' 00:17:21.382 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p0
00:17:21.382 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:17:21.382 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p1]' 00:17:21.382 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p1
00:17:21.382 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:17:21.382 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p0]' 00:17:21.382 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p0
00:17:21.382 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:17:21.382 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p1]' 00:17:21.382 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p1
00:17:21.382 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:17:21.382 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p2]' 00:17:21.382 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p2
00:17:21.382 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:17:21.382 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p3]' 00:17:21.382 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p3
00:17:21.382 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf
'%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:17:21.382 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p4]' 00:17:21.382 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p4 00:17:21.382 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:17:21.382 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p5]' 00:17:21.382 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p5 00:17:21.382 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:17:21.382 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p6]' 00:17:21.382 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p6 00:17:21.382 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:17:21.382 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p7]' 00:17:21.382 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p7 00:17:21.382 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:17:21.382 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_TestPT]' 00:17:21.382 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=TestPT 00:17:21.382 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:17:21.382 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_raid0]' 00:17:21.382 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=raid0 00:17:21.382 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:17:21.382 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_concat0]' 00:17:21.383 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=concat0 00:17:21.383 09:43:48 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:21.383 09:43:48 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:17:21.383 09:43:48 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:21.383 09:43:48 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:21.383 ************************************ 00:17:21.383 START TEST bdev_fio_trim 00:17:21.383 ************************************ 00:17:21.383 09:43:48 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 
--verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:21.383 09:43:48 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:21.383 09:43:48 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:21.383 09:43:48 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:21.383 09:43:48 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:21.383 09:43:48 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:21.383 09:43:48 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:17:21.383 09:43:48 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:21.383 09:43:48 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:21.383 09:43:48 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:17:21.383 09:43:48 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:21.383 09:43:48 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:21.383 09:43:48 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:17:21.383 09:43:48 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:17:21.383 09:43:48 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:21.383 09:43:48 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:17:21.383 09:43:48 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:21.383 09:43:48 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:21.383 09:43:48 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:17:21.383 09:43:48 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:17:21.383 09:43:48 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:21.383 09:43:48 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:21.383 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:21.383 
job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:17:21.383 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:17:21.383 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:17:21.383 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:17:21.383 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:17:21.383 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:17:21.383 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:17:21.383 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:17:21.383 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:17:21.383 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:17:21.383 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:17:21.383 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:17:21.383 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:17:21.383 fio-3.35
00:17:21.383 Starting 14 threads
00:17:21.383 EAL: TSC is not safe to use in SMP mode
00:17:21.383 EAL: TSC is not invariant
00:17:33.585
00:17:33.585 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=101374: Mon Jul 15 09:43:59 2024
00:17:33.585 write: IOPS=2321k, BW=9066MiB/s (9506MB/s)(88.5GiB/10001msec); 0 zone resets
00:17:33.585 slat (nsec): min=259, max=103249k, avg=1173.05, stdev=170732.12
00:17:33.585 clat (nsec): min=1009, max=2095.0M, avg=16299.47, stdev=1751075.62
00:17:33.585 lat (nsec): min=1645, max=2095.0M, avg=17472.52, stdev=1759379.21
00:17:33.585 clat percentiles (usec):
00:17:33.585 | 50.000th=[ 6], 99.000th=[ 21], 99.900th=[ 938], 99.990th=[10159],
00:17:33.585 | 99.999th=[94897]
00:17:33.585 bw ( MiB/s): min= 3274, max=15045, per=100.00%, avg=9312.55, stdev=261.15, samples=257
00:17:33.585 iops : min=838390, max=3851758, avg=2384014.12, stdev=66855.54, samples=257
00:17:33.585 trim: IOPS=2321k, BW=9066MiB/s (9506MB/s)(88.5GiB/10001msec); 0 zone resets
00:17:33.585 slat (nsec): min=479, max=1220.5M, avg=2302.42, stdev=391964.22
00:17:33.585 clat (nsec): min=318, max=1220.6M, avg=11248.15, stdev=855365.13
00:17:33.586 lat (nsec): min=1683, max=1220.6M, avg=13550.57, stdev=940909.33
00:17:33.586 clat percentiles (usec):
00:17:33.586 | 50.000th=[ 7], 99.000th=[ 21], 99.900th=[ 31], 99.990th=[ 62],
00:17:33.586 | 99.999th=[94897]
00:17:33.586 bw ( MiB/s): min= 3274, max=15045, per=100.00%, avg=9312.56, stdev=261.15, samples=257
00:17:33.586 iops : min=838394, max=3851778, avg=2384016.14, stdev=66855.60, samples=257
00:17:33.586 lat (nsec) : 500=0.02%, 750=0.01%, 1000=0.02%
00:17:33.586 lat (usec) : 2=0.15%, 4=25.65%, 10=55.50%, 20=17.55%, 50=0.93%
00:17:33.586 lat (usec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.16%
00:17:33.586 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
00:17:33.586 lat (msec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
00:17:33.586 lat (msec) : 2000=0.01%, >=2000=0.01%
00:17:33.586 cpu : usr=63.23%, sys=4.65%, ctx=1169326, majf=0, minf=0
00:17:33.586 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:17:33.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:33.586 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:33.586 issued rwts: total=0,23211027,23211033,0 short=0,0,0,0 dropped=0,0,0,0
00:17:33.586 latency : target=0, window=0, percentile=100.00%, depth=8
00:17:33.586
00:17:33.586 Run status group 0 (all jobs):
00:17:33.586 WRITE: bw=9066MiB/s (9506MB/s), 9066MiB/s-9066MiB/s (9506MB/s-9506MB/s), io=88.5GiB (95.1GB), run=10001-10001msec
00:17:33.586 TRIM: bw=9066MiB/s (9506MB/s), 9066MiB/s-9066MiB/s (9506MB/s-9506MB/s), io=88.5GiB (95.1GB), run=10001-10001msec
00:17:33.586
00:17:33.586 real 0m12.424s
00:17:33.586 user 1m33.846s
00:17:33.586 sys 0m9.438s
00:17:33.586 ************************************
00:17:33.586 END TEST bdev_fio_trim
00:17:33.586 ************************************
00:17:33.586 09:44:00 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable
00:17:33.586 09:44:00 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x
00:17:33.586 09:44:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1142 -- # return 0
00:17:33.586 09:44:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f
00:17:33.586 09:44:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:17:33.586 /home/vagrant/spdk_repo/spdk 09:44:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # popd
00:17:33.586 09:44:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT
00:17:33.586
00:17:33.586 real 0m25.795s
00:17:33.586 user 3m8.154s
00:17:33.586 sys 0m17.780s
00:17:33.586 09:44:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable
00:17:33.586 ************************************
00:17:33.586 END TEST bdev_fio
00:17:33.586 ************************************
00:17:33.586 09:44:00 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:17:33.586 09:44:00 blockdev_general -- common/autotest_common.sh@1142 -- # return 0
00:17:33.586 09:44:00 blockdev_general -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT
00:17:33.586 09:44:00 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:17:33.586 09:44:00 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']'
00:17:33.586 09:44:00 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable
00:17:33.586 09:44:00 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:17:33.586 ************************************
00:17:33.586 START TEST bdev_verify
00:17:33.586 ************************************
00:17:33.586 09:44:00 blockdev_general.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:17:33.586 [2024-07-15 09:44:00.998304] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization...
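A note on how the bdev_fio_trim stage above assembles its workload, for anyone reproducing it outside the harness: blockdev.sh filters the bdev JSON down to devices that advertise unmap support, emits one fio job section per survivor, probes the fio plugin for a linked sanitizer runtime, and then launches stock fio with the SPDK plugin preloaded. The bash sketch below condenses that pattern from the xtrace; variable names such as $fio_jobs and $plugin are illustrative placeholders, not the script's own.
  # Keep only trim-capable bdevs and emit one fio job section per name.
  for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name'); do
    echo "[job_$b]"       # per-bdev job section
    echo "filename=$b"    # resolved by the spdk_bdev ioengine against --spdk_json_conf
  done >> "$fio_jobs"
  # Preload the ASan runtime (if any) ahead of the SPDK fio plugin, as the trace does:
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 "$fio_jobs"
In this run no libasan was linked in, so asan_lib stayed empty and LD_PRELOAD carried only the plugin path, which is exactly what the trace shows.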
00:17:33.586 [2024-07-15 09:44:00.998637] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:17:33.844 EAL: TSC is not safe to use in SMP mode 00:17:33.844 EAL: TSC is not invariant 00:17:33.844 [2024-07-15 09:44:01.704200] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:33.844 [2024-07-15 09:44:01.817382] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:17:33.844 [2024-07-15 09:44:01.817423] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:17:33.844 [2024-07-15 09:44:01.820539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.844 [2024-07-15 09:44:01.820535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:33.844 [2024-07-15 09:44:01.881947] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:17:33.844 [2024-07-15 09:44:01.881983] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:17:33.844 [2024-07-15 09:44:01.889928] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:17:33.844 [2024-07-15 09:44:01.889949] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:17:33.844 [2024-07-15 09:44:01.897941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:17:33.844 [2024-07-15 09:44:01.897962] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:17:33.844 [2024-07-15 09:44:01.897969] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:17:34.102 [2024-07-15 09:44:01.945954] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:17:34.102 [2024-07-15 09:44:01.946027] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.102 [2024-07-15 09:44:01.946036] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x56184a36800 00:17:34.102 [2024-07-15 09:44:01.946043] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.102 [2024-07-15 09:44:01.946529] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.102 [2024-07-15 09:44:01.946565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:17:34.102 Running I/O for 5 seconds... 
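The vbdev_passthru notices in the startup above illustrate SPDK's deferred vbdev construction: TestPT is defined on top of Malloc3, but when the JSON config is loaded Malloc3 does not exist yet, so creation is parked ("vbdev creation deferred pending base bdev arrival"). Once Malloc3 registers, the examine path matches it, opens and claims the base bdev, and only then registers pt_bdev. A standalone equivalent would be something like the RPC pair below (method names as in current SPDK; worth double-checking against your tree, and the malloc sizes here are illustrative):
  scripts/rpc.py bdev_malloc_create -b Malloc3 32 512      # base bdev: 32 MiB, 512-byte blocks
  scripts/rpc.py bdev_passthru_create -b Malloc3 -p TestPT # claims Malloc3, registers TestPT
In the JSON config used by this test the passthru entry simply precedes Malloc3's arrival, which is what produces the "deferred" notice in this log.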
00:17:39.398
00:17:39.398 Latency(us)
00:17:39.398 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:39.398 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:39.398 Verification LBA range: start 0x0 length 0x1000
00:17:39.399 Malloc0 : 5.01 6534.59 25.53 0.00 0.00 19581.53 48.24 44984.49
00:17:39.399 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:17:39.399 Verification LBA range: start 0x1000 length 0x1000
00:17:39.399 Malloc0 : 5.02 124.11 0.48 0.00 0.00 1031740.44 100.59 1345330.67
00:17:39.399 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:39.399 Verification LBA range: start 0x0 length 0x800
00:17:39.399 Malloc1p0 : 5.02 6172.48 24.11 0.00 0.00 20726.13 343.23 20179.96
00:17:39.399 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:17:39.399 Verification LBA range: start 0x800 length 0x800
00:17:39.399 Malloc1p0 : 5.01 6662.34 26.02 0.00 0.00 19201.92 326.81 26275.99
00:17:39.399 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:39.399 Verification LBA range: start 0x0 length 0x800
00:17:39.399 Malloc1p1 : 5.02 6172.08 24.11 0.00 0.00 20723.38 316.95 20600.38
00:17:39.399 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:17:39.399 Verification LBA range: start 0x800 length 0x800
00:17:39.399 Malloc1p1 : 5.01 6661.94 26.02 0.00 0.00 19198.95 328.45 25540.26
00:17:39.399 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:39.399 Verification LBA range: start 0x0 length 0x200
00:17:39.399 Malloc2p0 : 5.02 6171.72 24.11 0.00 0.00 20720.38 358.01 20390.17
00:17:39.399 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:17:39.399 Verification LBA range: start 0x200 length 0x200
00:17:39.399 Malloc2p0 : 5.02 6661.51 26.02 0.00 0.00 19196.01 344.87 24804.53
00:17:39.399 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:39.399 Verification LBA range: start 0x0 length 0x200
00:17:39.399 Malloc2p1 : 5.02 6171.35 24.11 0.00 0.00 20717.36 338.30 20074.86
00:17:39.399 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:17:39.399 Verification LBA range: start 0x200 length 0x200
00:17:39.399 Malloc2p1 : 5.02 6661.12 26.02 0.00 0.00 19193.43 326.81 23858.60
00:17:39.399 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:39.399 Verification LBA range: start 0x0 length 0x200
00:17:39.399 Malloc2p2 : 5.02 6170.97 24.11 0.00 0.00 20714.42 341.59 19654.44
00:17:39.399 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:17:39.399 Verification LBA range: start 0x200 length 0x200
00:17:39.399 Malloc2p2 : 5.02 6660.75 26.02 0.00 0.00 19190.94 348.16 22912.66
00:17:39.399 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:39.399 Verification LBA range: start 0x0 length 0x200
00:17:39.399 Malloc2p3 : 5.02 6170.61 24.10 0.00 0.00 20711.44 328.45 19234.02
00:17:39.399 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:17:39.399 Verification LBA range: start 0x200 length 0x200
00:17:39.399 Malloc2p3 : 5.02 6660.33 26.02 0.00 0.00 19187.88 326.81 21125.90
00:17:39.399 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:39.399 Verification LBA range: start 0x0 length 0x200
00:17:39.399 Malloc2p4 : 5.02 6170.22 24.10 0.00 0.00 20708.42 343.23 18708.50
00:17:39.399 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:17:39.399 Verification LBA range: start 0x200 length 0x200
00:17:39.399 Malloc2p4 : 5.02 6659.97 26.02 0.00 0.00 19185.72 343.23 19969.75
00:17:39.399 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:39.399 Verification LBA range: start 0x0 length 0x200
00:17:39.399 Malloc2p5 : 5.02 6169.86 24.10 0.00 0.00 20705.90 335.02 18288.09
00:17:39.399 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:17:39.399 Verification LBA range: start 0x200 length 0x200
00:17:39.399 Malloc2p5 : 5.02 6659.62 26.01 0.00 0.00 19183.08 313.67 19969.75
00:17:39.399 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:39.399 Verification LBA range: start 0x0 length 0x200
00:17:39.399 Malloc2p6 : 5.02 6169.53 24.10 0.00 0.00 20702.83 333.38 16816.63
00:17:39.399 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:17:39.399 Verification LBA range: start 0x200 length 0x200
00:17:39.399 Malloc2p6 : 5.02 6659.24 26.01 0.00 0.00 19179.96 316.95 21231.00
00:17:39.399 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:39.399 Verification LBA range: start 0x0 length 0x200
00:17:39.399 Malloc2p7 : 5.02 6169.15 24.10 0.00 0.00 20699.47 333.38 18393.19
00:17:39.399 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:17:39.399 Verification LBA range: start 0x200 length 0x200
00:17:39.399 Malloc2p7 : 5.02 6658.85 26.01 0.00 0.00 19177.53 326.81 22282.04
00:17:39.399 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:39.399 Verification LBA range: start 0x0 length 0x1000
00:17:39.399 TestPT : 5.02 6167.56 24.09 0.00 0.00 20697.46 331.73 18813.61
00:17:39.399 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:17:39.399 Verification LBA range: start 0x1000 length 0x1000
00:17:39.399 TestPT : 5.02 4837.05 18.89 0.00 0.00 26383.81 807.99 63903.21
00:17:39.399 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:39.399 Verification LBA range: start 0x0 length 0x2000
00:17:39.399 raid0 : 5.02 6168.56 24.10 0.00 0.00 20691.61 331.73 19234.02
00:17:39.399 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:17:39.399 Verification LBA range: start 0x2000 length 0x2000
00:17:39.399 raid0 : 5.02 6658.17 26.01 0.00 0.00 19171.36 335.02 23227.97
00:17:39.399 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:39.399 Verification LBA range: start 0x0 length 0x2000
00:17:39.399 concat0 : 5.02 6168.20 24.09 0.00 0.00 20688.42 328.45 19864.65
00:17:39.399 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:17:39.399 Verification LBA range: start 0x2000 length 0x2000
00:17:39.399 concat0 : 5.02 6657.78 26.01 0.00 0.00 19168.59 346.51 24594.33
00:17:39.399 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:39.399 Verification LBA range: start 0x0 length 0x1000
00:17:39.399 raid1 : 5.02 6167.84 24.09 0.00 0.00 20685.20 395.78 20390.17
00:17:39.399 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:17:39.399 Verification LBA range: start 0x1000 length 0x1000
00:17:39.399 raid1 : 5.02 6680.37 26.10 0.00 0.00 19099.87 349.80 26170.89
00:17:39.399 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:39.399 Verification LBA range: start 0x0 length 0x4e2
00:17:39.399 AIO0 : 5.24 548.28 2.14 0.00 0.00 232066.48 23648.39 329606.01
00:17:39.399 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:17:39.399 Verification LBA range: start 0x4e2 length 0x4e2
00:17:39.399 AIO0 : 5.24 539.40 2.11 0.00 0.00 235475.63 13663.51 316152.71
00:17:39.399 ===================================================================================================================
00:17:39.399 Total : 185565.58 724.87 0.00 0.00 22081.97 48.24 1345330.67
00:17:39.658
00:17:39.658 real 0m6.683s
00:17:39.658 user 0m10.454s
00:17:39.658 sys 0m0.928s
00:17:39.658 09:44:07 blockdev_general.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:17:39.658 09:44:07 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:17:39.658 ************************************
00:17:39.658 END TEST bdev_verify
00:17:39.658 ************************************
00:17:39.658 09:44:07 blockdev_general -- common/autotest_common.sh@1142 -- # return 0
00:17:39.658 09:44:07 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:17:39.658 09:44:07 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']'
00:17:39.658 09:44:07 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable
00:17:39.658 09:44:07 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:17:39.658 ************************************
00:17:39.658 START TEST bdev_verify_big_io
00:17:39.658 ************************************
00:17:39.658 09:44:07 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:17:40.226 [2024-07-15 09:44:07.738895] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization...
00:17:40.226 [2024-07-15 09:44:07.739211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:17:40.226 EAL: TSC is not safe to use in SMP mode
00:17:40.226 EAL: TSC is not invariant
00:17:40.226 [2024-07-15 09:44:08.177003] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2
00:17:40.226 [2024-07-15 09:44:08.291060] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:17:40.226 [2024-07-15 09:44:08.291114] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1].
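A quick consistency check on the bdev_verify latency table above: that run used -o 4096, so the MiB/s column should equal IOPS x 4096 / 2^20. For the core-0 Malloc0 job:
  echo '6534.59 * 4096 / 1048576' | bc -l   # -> 25.52..., matching the 25.53 MiB/s column
The AIO0 rows are the clear outlier (roughly 232-235 ms average latency versus roughly 20 us for the other bdevs), plausibly because AIO0 is the only device in the set backed by a file (/home/vagrant/spdk_repo/spdk/test/bdev/aiofile, with a 2048-byte block-size override) and its IO goes through the kernel AIO path instead of RAM.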
00:17:40.226 [2024-07-15 09:44:08.294317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.226 [2024-07-15 09:44:08.294306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.485 [2024-07-15 09:44:08.355874] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:17:40.485 [2024-07-15 09:44:08.355921] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:17:40.485 [2024-07-15 09:44:08.363858] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:17:40.485 [2024-07-15 09:44:08.363884] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:17:40.485 [2024-07-15 09:44:08.371874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:17:40.485 [2024-07-15 09:44:08.371896] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:17:40.485 [2024-07-15 09:44:08.371902] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:17:40.485 [2024-07-15 09:44:08.419876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:17:40.485 [2024-07-15 09:44:08.419920] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:40.485 [2024-07-15 09:44:08.419929] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1192b2c36800 00:17:40.485 [2024-07-15 09:44:08.419936] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:40.485 [2024-07-15 09:44:08.420252] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:40.485 [2024-07-15 09:44:08.420273] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:17:40.485 [2024-07-15 09:44:08.520821] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:17:40.485 [2024-07-15 09:44:08.520971] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:17:40.485 [2024-07-15 09:44:08.521065] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:17:40.485 [2024-07-15 09:44:08.521157] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:17:40.485 [2024-07-15 09:44:08.521237] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:17:40.485 [2024-07-15 09:44:08.521327] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). 
Queue depth is limited to 32 00:17:40.485 [2024-07-15 09:44:08.521419] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:17:40.485 [2024-07-15 09:44:08.521510] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:17:40.486 [2024-07-15 09:44:08.521594] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:17:40.486 [2024-07-15 09:44:08.521743] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:17:40.486 [2024-07-15 09:44:08.521850] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:17:40.486 [2024-07-15 09:44:08.521945] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:17:40.486 [2024-07-15 09:44:08.522035] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:17:40.486 [2024-07-15 09:44:08.522134] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:17:40.486 [2024-07-15 09:44:08.522231] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:17:40.486 [2024-07-15 09:44:08.522339] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:17:40.486 [2024-07-15 09:44:08.523557] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:17:40.486 [2024-07-15 09:44:08.523714] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:17:40.486 Running I/O for 5 seconds... 
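The warnings traced above show bdevperf clamping the requested queue depth (-q 128) to what each target can absorb under a verify workload: 32 for the Malloc2pX partitions, 78 for AIO0. A minimal sketch of that clamp in shell, assuming the cap is however many non-overlapping IOs of the configured size fit in the target bdev (the real formula lives in bdevperf's bdevperf_construct_job and is not reproduced in this log):

# sketch only: num_blocks is a made-up example, and the max_qd derivation is an
# assumption about bdevperf's verify constraint, not a quote of the source
block_size=512
num_blocks=4096                            # would reproduce the 32 logged above for Malloc2pX
io_size_blocks=$(( 65536 / block_size ))   # -o 65536 expressed in 512-byte blocks
max_qd=$(( num_blocks / io_size_blocks ))  # distinct LBA slices a verify job can keep in flight
qd=128
[ "$qd" -gt "$max_qd" ] && qd=$max_qd      # mirrors "Queue depth is limited to N"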
00:17:45.760 00:17:45.760 Latency(us) 00:17:45.760 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.760 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:45.760 Verification LBA range: start 0x0 length 0x100 00:17:45.760 Malloc0 : 5.05 3475.82 217.24 0.00 0.00 36735.57 78.01 92491.48 00:17:45.760 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:45.760 Verification LBA range: start 0x100 length 0x100 00:17:45.760 Malloc0 : 5.05 3346.10 209.13 0.00 0.00 38157.79 73.49 116034.77 00:17:45.760 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:45.760 Verification LBA range: start 0x0 length 0x80 00:17:45.760 Malloc1p0 : 5.09 879.46 54.97 0.00 0.00 144552.09 1340.08 198436.27 00:17:45.760 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:45.760 Verification LBA range: start 0x80 length 0x80 00:17:45.760 Malloc1p0 : 5.09 1167.19 72.95 0.00 0.00 108987.66 1583.13 150508.87 00:17:45.760 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:45.760 Verification LBA range: start 0x0 length 0x80 00:17:45.760 Malloc1p1 : 5.11 450.80 28.17 0.00 0.00 281572.89 758.72 334651.00 00:17:45.760 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:45.760 Verification LBA range: start 0x80 length 0x80 00:17:45.760 Malloc1p1 : 5.11 438.34 27.40 0.00 0.00 289626.28 709.45 326242.69 00:17:45.760 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:17:45.760 Verification LBA range: start 0x0 length 0x20 00:17:45.760 Malloc2p0 : 5.08 437.43 27.34 0.00 0.00 72596.95 574.79 117716.43 00:17:45.760 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:17:45.760 Verification LBA range: start 0x20 length 0x20 00:17:45.760 Malloc2p0 : 5.07 422.59 26.41 0.00 0.00 75095.42 578.07 106365.21 00:17:45.760 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:17:45.760 Verification LBA range: start 0x0 length 0x20 00:17:45.760 Malloc2p1 : 5.08 437.37 27.34 0.00 0.00 72560.17 541.94 116875.60 00:17:45.760 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:17:45.760 Verification LBA range: start 0x20 length 0x20 00:17:45.760 Malloc2p1 : 5.07 422.53 26.41 0.00 0.00 75064.90 541.94 105103.96 00:17:45.760 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:17:45.760 Verification LBA range: start 0x0 length 0x20 00:17:45.760 Malloc2p2 : 5.09 437.32 27.33 0.00 0.00 72546.63 561.65 116034.77 00:17:45.760 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:17:45.760 Verification LBA range: start 0x20 length 0x20 00:17:45.760 Malloc2p2 : 5.09 424.60 26.54 0.00 0.00 74727.37 558.36 104263.13 00:17:45.760 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:17:45.760 Verification LBA range: start 0x0 length 0x20 00:17:45.760 Malloc2p3 : 5.09 437.27 27.33 0.00 0.00 72524.25 558.36 115193.94 00:17:45.760 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:17:45.760 Verification LBA range: start 0x20 length 0x20 00:17:45.760 Malloc2p3 : 5.09 424.55 26.53 0.00 0.00 74706.35 558.36 103422.30 00:17:45.760 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:17:45.760 Verification LBA range: start 0x0 length 0x20 00:17:45.760 Malloc2p4 : 5.09 437.21 27.33 0.00 0.00 72497.05 578.07 113512.28 
00:17:45.760 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:17:45.760 Verification LBA range: start 0x20 length 0x20 00:17:45.760 Malloc2p4 : 5.09 424.50 26.53 0.00 0.00 74673.89 568.22 102161.05 00:17:45.760 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:17:45.760 Verification LBA range: start 0x0 length 0x20 00:17:45.760 Malloc2p5 : 5.09 437.16 27.32 0.00 0.00 72462.04 528.80 112671.44 00:17:45.760 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:17:45.760 Verification LBA range: start 0x20 length 0x20 00:17:45.760 Malloc2p5 : 5.09 424.45 26.53 0.00 0.00 74649.89 528.80 101320.22 00:17:45.760 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:17:45.760 Verification LBA range: start 0x0 length 0x20 00:17:45.760 Malloc2p6 : 5.09 437.11 27.32 0.00 0.00 72441.49 509.10 111830.61 00:17:45.760 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:17:45.760 Verification LBA range: start 0x20 length 0x20 00:17:45.760 Malloc2p6 : 5.09 424.41 26.53 0.00 0.00 74614.66 515.67 100479.38 00:17:45.760 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:17:45.760 Verification LBA range: start 0x0 length 0x20 00:17:45.760 Malloc2p7 : 5.09 437.06 27.32 0.00 0.00 72405.41 541.94 110989.78 00:17:45.760 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:17:45.760 Verification LBA range: start 0x20 length 0x20 00:17:45.760 Malloc2p7 : 5.09 424.36 26.52 0.00 0.00 74593.06 545.23 99218.14 00:17:45.760 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:45.760 Verification LBA range: start 0x0 length 0x100 00:17:45.760 TestPT : 5.15 447.99 28.00 0.00 0.00 280868.69 5859.55 290927.76 00:17:45.760 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:45.760 Verification LBA range: start 0x100 length 0x100 00:17:45.760 TestPT : 5.22 353.39 22.09 0.00 0.00 355499.06 5859.55 410325.86 00:17:45.760 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:45.760 Verification LBA range: start 0x0 length 0x200 00:17:45.760 raid0 : 5.11 453.72 28.36 0.00 0.00 278137.85 528.80 314471.04 00:17:45.760 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:45.760 Verification LBA range: start 0x200 length 0x200 00:17:45.760 raid0 : 5.11 441.34 27.58 0.00 0.00 286131.84 518.95 307744.39 00:17:45.760 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:45.760 Verification LBA range: start 0x0 length 0x200 00:17:45.760 concat0 : 5.11 453.70 28.36 0.00 0.00 277666.42 522.24 307744.39 00:17:45.760 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:45.760 Verification LBA range: start 0x200 length 0x200 00:17:45.760 concat0 : 5.11 441.31 27.58 0.00 0.00 285697.44 509.10 301017.74 00:17:45.760 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:45.760 Verification LBA range: start 0x0 length 0x100 00:17:45.760 raid1 : 5.11 456.77 28.55 0.00 0.00 275524.37 752.15 299336.07 00:17:45.760 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:45.760 Verification LBA range: start 0x100 length 0x100 00:17:45.760 raid1 : 5.11 444.66 27.79 0.00 0.00 283130.97 624.05 292609.42 00:17:45.760 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:17:45.760 Verification LBA range: start 0x0 length 0x4e 00:17:45.760 AIO0 : 
5.11 450.75 28.17 0.00 0.00 169948.12 696.31 180778.81 00:17:45.760 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:17:45.760 Verification LBA range: start 0x4e length 0x4e 00:17:45.760 AIO0 : 5.10 435.38 27.21 0.00 0.00 176014.70 1031.33 173211.32 00:17:45.760 =================================================================================================================== 00:17:45.760 Total : 21026.60 1314.16 0.00 0.00 116146.01 73.49 410325.86 00:17:46.088 00:17:46.088 real 0m6.434s 00:17:46.088 user 0m11.545s 00:17:46.088 sys 0m0.555s 00:17:46.088 09:44:14 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:46.088 09:44:14 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:17:46.088 ************************************ 00:17:46.088 END TEST bdev_verify_big_io 00:17:46.088 ************************************ 00:17:46.348 09:44:14 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:17:46.348 09:44:14 blockdev_general -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:46.348 09:44:14 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:17:46.348 09:44:14 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:46.348 09:44:14 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:17:46.348 ************************************ 00:17:46.348 START TEST bdev_write_zeroes 00:17:46.348 ************************************ 00:17:46.348 09:44:14 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:46.348 [2024-07-15 09:44:14.236098] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:17:46.348 [2024-07-15 09:44:14.236405] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:17:46.916 EAL: TSC is not safe to use in SMP mode 00:17:46.916 EAL: TSC is not invariant 00:17:46.916 [2024-07-15 09:44:14.811751] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.916 [2024-07-15 09:44:14.924625] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
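The bdev_write_zeroes pass starting above reuses the same bdevperf binary with a shorter, zero-filling workload; restating the traced invocation with the knobs annotated (annotations reflect my reading of bdevperf usage, so verify against your checkout):

# -q 128           queue depth per job
# -o 4096          IO size in bytes
# -w write_zeroes  issue write-zeroes commands instead of data-carrying writes
# -t 1             run time in seconds
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w write_zeroes -t 1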
00:17:46.916 [2024-07-15 09:44:14.927170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.916 [2024-07-15 09:44:14.988475] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:17:46.916 [2024-07-15 09:44:14.988526] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:17:46.916 [2024-07-15 09:44:14.996462] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:17:46.916 [2024-07-15 09:44:14.996483] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:17:47.175 [2024-07-15 09:44:15.004481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:17:47.175 [2024-07-15 09:44:15.004503] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:17:47.175 [2024-07-15 09:44:15.004509] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:17:47.175 [2024-07-15 09:44:15.052480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:17:47.175 [2024-07-15 09:44:15.052533] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.175 [2024-07-15 09:44:15.052541] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb4219a36800 00:17:47.175 [2024-07-15 09:44:15.052548] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.175 [2024-07-15 09:44:15.052879] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.175 [2024-07-15 09:44:15.052894] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:17:47.175 Running I/O for 1 seconds... 
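Whether a bdev accepts that workload is advertised in its supported_io_types map (the same map is dumped for Malloc_0 and Null_1 later in this log). A hedged one-liner to check it up front, assuming the standard scripts/rpc.py and that jq is installed:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b TestPT \
    | jq '.[0].supported_io_types.write_zeroes'   # prints true when -w write_zeroes is valid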
00:17:48.553 00:17:48.553 Latency(us) 00:17:48.553 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.553 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:48.553 Malloc0 : 1.01 27565.35 107.68 0.00 0.00 4643.32 220.06 8986.39 00:17:48.553 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:48.553 Malloc1p0 : 1.01 27562.17 107.66 0.00 0.00 4640.86 241.41 8776.18 00:17:48.553 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:48.553 Malloc1p1 : 1.01 27559.82 107.66 0.00 0.00 4638.82 238.13 8723.63 00:17:48.554 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:48.554 Malloc2p0 : 1.01 27557.69 107.65 0.00 0.00 4637.14 239.77 8513.42 00:17:48.554 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:48.554 Malloc2p1 : 1.01 27555.45 107.64 0.00 0.00 4635.03 241.41 8303.21 00:17:48.554 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:48.554 Malloc2p2 : 1.01 27550.91 107.62 0.00 0.00 4633.78 239.77 8093.00 00:17:48.554 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:48.554 Malloc2p3 : 1.01 27548.75 107.61 0.00 0.00 4631.59 241.41 8093.00 00:17:48.554 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:48.554 Malloc2p4 : 1.01 27546.62 107.60 0.00 0.00 4629.15 243.05 7987.90 00:17:48.554 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:48.554 Malloc2p5 : 1.01 27544.54 107.60 0.00 0.00 4627.43 246.34 7882.80 00:17:48.554 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:48.554 Malloc2p6 : 1.01 27542.06 107.59 0.00 0.00 4626.00 241.41 7777.69 00:17:48.554 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:48.554 Malloc2p7 : 1.01 27539.82 107.58 0.00 0.00 4623.86 246.34 7567.49 00:17:48.554 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:48.554 TestPT : 1.01 27537.06 107.57 0.00 0.00 4621.68 238.13 7409.83 00:17:48.554 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:48.554 raid0 : 1.01 27532.64 107.55 0.00 0.00 4619.90 284.11 7199.62 00:17:48.554 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:48.554 concat0 : 1.01 27529.77 107.54 0.00 0.00 4617.75 274.26 7041.97 00:17:48.554 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:48.554 raid1 : 1.01 27636.34 107.95 0.00 0.00 4595.86 191.32 6674.10 00:17:48.554 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:48.554 AIO0 : 1.09 1588.67 6.21 0.00 0.00 77002.83 771.86 209367.09 00:17:48.554 =================================================================================================================== 00:17:48.554 Total : 414897.66 1620.69 0.00 0.00 4927.32 191.32 209367.09 00:17:48.554 00:17:48.554 real 0m2.394s 00:17:48.554 user 0m1.604s 00:17:48.554 sys 0m0.668s 00:17:48.554 09:44:16 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:48.554 ************************************ 00:17:48.554 09:44:16 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:17:48.554 END TEST bdev_write_zeroes 00:17:48.554 ************************************ 00:17:48.814 09:44:16 blockdev_general 
-- common/autotest_common.sh@1142 -- # return 0 00:17:48.814 09:44:16 blockdev_general -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:48.814 09:44:16 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:17:48.814 09:44:16 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:48.814 09:44:16 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:17:48.814 ************************************ 00:17:48.814 START TEST bdev_json_nonenclosed 00:17:48.814 ************************************ 00:17:48.814 09:44:16 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:48.814 [2024-07-15 09:44:16.688866] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:17:48.814 [2024-07-15 09:44:16.689191] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:17:49.382 EAL: TSC is not safe to use in SMP mode 00:17:49.382 EAL: TSC is not invariant 00:17:49.382 [2024-07-15 09:44:17.408364] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.642 [2024-07-15 09:44:17.523638] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:17:49.642 [2024-07-15 09:44:17.526156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.642 [2024-07-15 09:44:17.526198] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
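That parse error is the whole point of the test: bdev_json_nonenclosed feeds bdevperf a config whose top level is not wrapped in a JSON object. The shipped test/bdev/nonenclosed.json is not reproduced in this log, so the file below is only an illustration of the shape that trips json_config_prepare_ctx:

# illustrative only -- the real nonenclosed.json may carry more content
cat > /tmp/nonenclosed.json << 'EOF'
"subsystems": []
EOF
# pointing bdevperf --json at it should fail fast; the trace below records es=234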
00:17:49.642 [2024-07-15 09:44:17.526208] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:49.642 [2024-07-15 09:44:17.526215] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:49.642 00:17:49.642 real 0m1.012s 00:17:49.642 user 0m0.258s 00:17:49.642 sys 0m0.754s 00:17:49.642 09:44:17 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:17:49.642 09:44:17 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:49.642 09:44:17 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:17:49.642 ************************************ 00:17:49.642 END TEST bdev_json_nonenclosed 00:17:49.642 ************************************ 00:17:49.901 09:44:17 blockdev_general -- common/autotest_common.sh@1142 -- # return 234 00:17:49.901 09:44:17 blockdev_general -- bdev/blockdev.sh@782 -- # true 00:17:49.901 09:44:17 blockdev_general -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:49.901 09:44:17 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:17:49.901 09:44:17 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:49.901 09:44:17 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:17:49.901 ************************************ 00:17:49.901 START TEST bdev_json_nonarray 00:17:49.901 ************************************ 00:17:49.901 09:44:17 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:49.901 [2024-07-15 09:44:17.760891] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:17:49.901 [2024-07-15 09:44:17.761219] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:17:50.469 EAL: TSC is not safe to use in SMP mode 00:17:50.469 EAL: TSC is not invariant 00:17:50.469 [2024-07-15 09:44:18.462662] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.729 [2024-07-15 09:44:18.576233] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:17:50.729 [2024-07-15 09:44:18.578805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.729 [2024-07-15 09:44:18.578851] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
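Same pattern with the next malformed shape: here 'subsystems' exists but is an object rather than an array. Again an illustration, not the literal test/bdev/nonarray.json:

# illustrative only
cat > /tmp/nonarray.json << 'EOF'
{ "subsystems": {} }
EOF
# json_config_prepare_ctx rejects this with the error logged above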
00:17:50.729 [2024-07-15 09:44:18.578860] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:50.729 [2024-07-15 09:44:18.578867] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:50.729 00:17:50.729 real 0m0.995s 00:17:50.729 user 0m0.238s 00:17:50.729 sys 0m0.755s 00:17:50.729 09:44:18 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:17:50.729 09:44:18 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:50.729 09:44:18 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:17:50.729 ************************************ 00:17:50.729 END TEST bdev_json_nonarray 00:17:50.729 ************************************ 00:17:50.729 09:44:18 blockdev_general -- common/autotest_common.sh@1142 -- # return 234 00:17:50.729 09:44:18 blockdev_general -- bdev/blockdev.sh@785 -- # true 00:17:50.729 09:44:18 blockdev_general -- bdev/blockdev.sh@787 -- # [[ bdev == bdev ]] 00:17:50.729 09:44:18 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qos qos_test_suite '' 00:17:50.729 09:44:18 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:50.729 09:44:18 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:50.729 09:44:18 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:17:50.729 ************************************ 00:17:50.729 START TEST bdev_qos 00:17:50.729 ************************************ 00:17:50.729 09:44:18 blockdev_general.bdev_qos -- common/autotest_common.sh@1123 -- # qos_test_suite '' 00:17:50.729 09:44:18 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # QOS_PID=48176 00:17:50.729 Process qos testing pid: 48176 00:17:50.729 09:44:18 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # echo 'Process qos testing pid: 48176' 00:17:50.729 09:44:18 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:17:50.729 09:44:18 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:17:50.729 09:44:18 blockdev_general.bdev_qos -- bdev/blockdev.sh@449 -- # waitforlisten 48176 00:17:50.729 09:44:18 blockdev_general.bdev_qos -- common/autotest_common.sh@829 -- # '[' -z 48176 ']' 00:17:50.729 09:44:18 blockdev_general.bdev_qos -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.729 09:44:18 blockdev_general.bdev_qos -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:50.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.729 09:44:18 blockdev_general.bdev_qos -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.729 09:44:18 blockdev_general.bdev_qos -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:50.729 09:44:18 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:17:50.729 [2024-07-15 09:44:18.814710] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
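The QoS suite launches bdevperf with -z, which as I understand it parks the app waiting for an RPC instead of starting IO immediately, so the test can create and shape its targets first. A sketch of the sequence the surrounding traces follow (paths copied from the traces; the wait for the RPC socket is elided):

# launch parked, remember the pid for cleanup
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 &
QOS_PID=$!
# once /var/tmp/spdk.sock is up, create the two targets the suite uses
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create -b Malloc_0 128 512
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create Null_1 128 512
# ... measure, set QoS limits ... then release the parked workload
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests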
00:17:50.729 [2024-07-15 09:44:18.815039] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:17:51.666 EAL: TSC is not safe to use in SMP mode 00:17:51.666 EAL: TSC is not invariant 00:17:51.666 [2024-07-15 09:44:19.521210] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.666 [2024-07-15 09:44:19.635382] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:17:51.666 [2024-07-15 09:44:19.637829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.666 09:44:19 blockdev_general.bdev_qos -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:51.666 09:44:19 blockdev_general.bdev_qos -- common/autotest_common.sh@862 -- # return 0 00:17:51.666 09:44:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:17:51.666 09:44:19 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.666 09:44:19 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:17:51.666 Malloc_0 00:17:51.666 09:44:19 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.666 09:44:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # waitforbdev Malloc_0 00:17:51.666 09:44:19 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_0 00:17:51.666 09:44:19 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:51.666 09:44:19 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local i 00:17:51.666 09:44:19 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:51.666 09:44:19 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:51.666 09:44:19 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:17:51.666 09:44:19 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.666 09:44:19 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:17:51.666 09:44:19 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.666 09:44:19 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:17:51.666 09:44:19 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.666 09:44:19 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:17:51.666 [ 00:17:51.666 { 00:17:51.666 "name": "Malloc_0", 00:17:51.666 "aliases": [ 00:17:51.666 "cefc6a43-428e-11ef-a0af-c98d8ee52a94" 00:17:51.666 ], 00:17:51.666 "product_name": "Malloc disk", 00:17:51.666 "block_size": 512, 00:17:51.666 "num_blocks": 262144, 00:17:51.666 "uuid": "cefc6a43-428e-11ef-a0af-c98d8ee52a94", 00:17:51.666 "assigned_rate_limits": { 00:17:51.666 "rw_ios_per_sec": 0, 00:17:51.666 "rw_mbytes_per_sec": 0, 00:17:51.666 "r_mbytes_per_sec": 0, 00:17:51.666 "w_mbytes_per_sec": 0 00:17:51.666 }, 00:17:51.666 "claimed": false, 00:17:51.666 "zoned": false, 00:17:51.666 "supported_io_types": { 00:17:51.666 "read": true, 00:17:51.666 "write": true, 00:17:51.666 "unmap": true, 00:17:51.666 "flush": true, 00:17:51.666 "reset": true, 00:17:51.666 "nvme_admin": false, 00:17:51.666 "nvme_io": false, 00:17:51.666 "nvme_io_md": false, 00:17:51.666 "write_zeroes": true, 00:17:51.666 "zcopy": true, 00:17:51.666 
"get_zone_info": false, 00:17:51.666 "zone_management": false, 00:17:51.666 "zone_append": false, 00:17:51.666 "compare": false, 00:17:51.666 "compare_and_write": false, 00:17:51.666 "abort": true, 00:17:51.666 "seek_hole": false, 00:17:51.666 "seek_data": false, 00:17:51.666 "copy": true, 00:17:51.666 "nvme_iov_md": false 00:17:51.666 }, 00:17:51.666 "memory_domains": [ 00:17:51.666 { 00:17:51.925 "dma_device_id": "system", 00:17:51.925 "dma_device_type": 1 00:17:51.925 }, 00:17:51.925 { 00:17:51.925 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.925 "dma_device_type": 2 00:17:51.925 } 00:17:51.925 ], 00:17:51.925 "driver_specific": {} 00:17:51.925 } 00:17:51.925 ] 00:17:51.925 09:44:19 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.925 09:44:19 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # return 0 00:17:51.925 09:44:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # rpc_cmd bdev_null_create Null_1 128 512 00:17:51.925 09:44:19 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.925 09:44:19 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:17:51.925 Null_1 00:17:51.925 09:44:19 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.925 09:44:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@454 -- # waitforbdev Null_1 00:17:51.925 09:44:19 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local bdev_name=Null_1 00:17:51.925 09:44:19 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:51.925 09:44:19 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local i 00:17:51.925 09:44:19 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:51.925 09:44:19 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:51.925 09:44:19 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:17:51.925 09:44:19 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.925 09:44:19 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:17:51.925 09:44:19 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.925 09:44:19 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:17:51.925 09:44:19 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.925 09:44:19 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:17:51.925 [ 00:17:51.925 { 00:17:51.925 "name": "Null_1", 00:17:51.925 "aliases": [ 00:17:51.925 "cf02841b-428e-11ef-a0af-c98d8ee52a94" 00:17:51.925 ], 00:17:51.925 "product_name": "Null disk", 00:17:51.925 "block_size": 512, 00:17:51.925 "num_blocks": 262144, 00:17:51.925 "uuid": "cf02841b-428e-11ef-a0af-c98d8ee52a94", 00:17:51.925 "assigned_rate_limits": { 00:17:51.925 "rw_ios_per_sec": 0, 00:17:51.925 "rw_mbytes_per_sec": 0, 00:17:51.925 "r_mbytes_per_sec": 0, 00:17:51.925 "w_mbytes_per_sec": 0 00:17:51.925 }, 00:17:51.925 "claimed": false, 00:17:51.925 "zoned": false, 00:17:51.925 "supported_io_types": { 00:17:51.925 "read": true, 00:17:51.925 "write": true, 00:17:51.925 "unmap": false, 00:17:51.925 "flush": false, 00:17:51.925 "reset": true, 00:17:51.925 "nvme_admin": false, 00:17:51.925 "nvme_io": false, 00:17:51.925 "nvme_io_md": false, 00:17:51.925 "write_zeroes": true, 00:17:51.925 "zcopy": 
false, 00:17:51.925 "get_zone_info": false, 00:17:51.925 "zone_management": false, 00:17:51.925 "zone_append": false, 00:17:51.925 "compare": false, 00:17:51.925 "compare_and_write": false, 00:17:51.925 "abort": true, 00:17:51.925 "seek_hole": false, 00:17:51.925 "seek_data": false, 00:17:51.925 "copy": false, 00:17:51.925 "nvme_iov_md": false 00:17:51.925 }, 00:17:51.925 "driver_specific": {} 00:17:51.925 } 00:17:51.925 ] 00:17:51.925 09:44:19 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.925 09:44:19 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # return 0 00:17:51.925 09:44:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@457 -- # qos_function_test 00:17:51.925 09:44:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_iops_limit=1000 00:17:51.925 09:44:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local qos_lower_bw_limit=2 00:17:51.925 09:44:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local io_result=0 00:17:51.925 09:44:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:51.925 09:44:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local iops_limit=0 00:17:51.925 09:44:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@414 -- # local bw_limit=0 00:17:51.925 09:44:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # get_io_result IOPS Malloc_0 00:17:51.925 09:44:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:17:51.925 09:44:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:17:51.925 09:44:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:17:51.925 09:44:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:17:51.925 09:44:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:17:51.925 09:44:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:17:51.925 Running I/O for 60 seconds... 
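get_io_result, traced above, is just iostat.py sampled over five one-second intervals with the last matching line kept; the limit applied next (161000) appears to be derived from this first unthrottled measurement (644873), and run_qos_test then asserts the throttled rate lands in a +/-10% window around the limit. A condensed sketch consistent with every bound checked later in this log (161001 in [144900, 177100]; 312032 in [278323, 340172]; 2164 in [1843, 2252] -- each exactly limit*9/10 and limit*11/10 in integer math):

# measurement, as traced: keep the last iostat line for the device, IOPS is column 2
iostat_result=$(/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 | grep Malloc_0 | tail -1)
qos_result=$(echo "$iostat_result" | awk '{print $2}' | cut -d. -f1)  # dropping decimals this way is an assumption
# +/-10% acceptance window with shell integer arithmetic
qos_limit=161000
lower_limit=$(( qos_limit * 9 / 10 ))    # 144900
upper_limit=$(( qos_limit * 11 / 10 ))   # 177100
[ "$qos_result" -ge "$lower_limit" ] && [ "$qos_result" -le "$upper_limit" ] \
    && echo "within window" || echo "outside window"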
00:17:57.225 09:44:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 644873.37 2579493.48 0.00 0.00 2759680.00 0.00 0.00 ' 00:17:57.225 09:44:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:17:57.225 09:44:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:17:57.225 09:44:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # iostat_result=644873.37 00:17:57.225 09:44:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 644873 00:17:57.225 09:44:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # io_result=644873 00:17:57.225 09:44:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # iops_limit=161000 00:17:57.225 09:44:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@419 -- # '[' 161000 -gt 1000 ']' 00:17:57.225 09:44:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 161000 Malloc_0 00:17:57.225 09:44:25 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.225 09:44:25 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:17:57.484 09:44:25 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.484 09:44:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@423 -- # run_test bdev_qos_iops run_qos_test 161000 IOPS Malloc_0 00:17:57.484 09:44:25 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:57.484 09:44:25 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:57.484 09:44:25 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:17:57.484 ************************************ 00:17:57.484 START TEST bdev_qos_iops 00:17:57.484 ************************************ 00:17:57.484 09:44:25 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1123 -- # run_qos_test 161000 IOPS Malloc_0 00:17:57.484 09:44:25 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_limit=161000 00:17:57.484 09:44:25 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@390 -- # local qos_result=0 00:17:57.484 09:44:25 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # get_io_result IOPS Malloc_0 00:17:57.484 09:44:25 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:17:57.484 09:44:25 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:17:57.484 09:44:25 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # local iostat_result 00:17:57.484 09:44:25 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:17:57.484 09:44:25 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:17:57.484 09:44:25 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # tail -1 00:18:02.753 09:44:30 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 161001.84 644007.36 0.00 0.00 694232.00 0.00 0.00 ' 00:18:02.753 09:44:30 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:18:02.753 09:44:30 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:18:02.753 09:44:30 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # iostat_result=161001.84 00:18:02.753 09:44:30 
blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@385 -- # echo 161001 00:18:02.753 09:44:30 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # qos_result=161001 00:18:02.753 09:44:30 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@393 -- # '[' IOPS = BANDWIDTH ']' 00:18:02.753 09:44:30 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # lower_limit=144900 00:18:02.753 09:44:30 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@397 -- # upper_limit=177100 00:18:02.753 09:44:30 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 161001 -lt 144900 ']' 00:18:02.753 09:44:30 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 161001 -gt 177100 ']' 00:18:02.753 00:18:02.753 real 0m5.489s 00:18:02.753 user 0m0.110s 00:18:02.753 sys 0m0.032s 00:18:02.753 09:44:30 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:02.753 09:44:30 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x 00:18:02.753 ************************************ 00:18:02.753 END TEST bdev_qos_iops 00:18:02.753 ************************************ 00:18:03.013 09:44:30 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:18:03.013 09:44:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # get_io_result BANDWIDTH Null_1 00:18:03.013 09:44:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:18:03.013 09:44:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:18:03.013 09:44:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:18:03.013 09:44:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Null_1 00:18:03.013 09:44:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:18:03.013 09:44:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:18:08.371 09:44:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 734649.64 2938598.55 0.00 0.00 3095552.00 0.00 0.00 ' 00:18:08.371 09:44:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:18:08.371 09:44:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:18:08.371 09:44:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:18:08.371 09:44:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # iostat_result=3095552.00 00:18:08.371 09:44:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 3095552 00:18:08.371 09:44:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=3095552 00:18:08.371 09:44:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # bw_limit=302 00:18:08.371 09:44:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@429 -- # '[' 302 -lt 2 ']' 00:18:08.371 09:44:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 302 Null_1 00:18:08.371 09:44:36 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.371 09:44:36 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:18:08.371 09:44:36 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.371 09:44:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@433 -- # run_test bdev_qos_bw run_qos_test 302 BANDWIDTH Null_1 00:18:08.371 09:44:36 
blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:08.371 09:44:36 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:08.371 09:44:36 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:18:08.371 ************************************ 00:18:08.371 START TEST bdev_qos_bw 00:18:08.371 ************************************ 00:18:08.371 09:44:36 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1123 -- # run_qos_test 302 BANDWIDTH Null_1 00:18:08.371 09:44:36 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_limit=302 00:18:08.371 09:44:36 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:18:08.371 09:44:36 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Null_1 00:18:08.371 09:44:36 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:18:08.371 09:44:36 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:18:08.371 09:44:36 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:18:08.371 09:44:36 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # grep Null_1 00:18:08.371 09:44:36 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:18:08.371 09:44:36 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # tail -1 00:18:14.938 09:44:41 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 77302.14 309208.56 0.00 0.00 312032.00 0.00 0.00 ' 00:18:14.938 09:44:41 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:18:14.938 09:44:41 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:18:14.938 09:44:41 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:18:14.938 09:44:41 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # iostat_result=312032.00 00:18:14.938 09:44:41 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@385 -- # echo 312032 00:18:14.938 09:44:41 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # qos_result=312032 00:18:14.938 09:44:41 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:18:14.938 09:44:41 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@394 -- # qos_limit=309248 00:18:14.938 09:44:41 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # lower_limit=278323 00:18:14.938 09:44:41 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@397 -- # upper_limit=340172 00:18:14.938 09:44:41 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 312032 -lt 278323 ']' 00:18:14.938 09:44:41 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 312032 -gt 340172 ']' 00:18:14.938 00:18:14.938 real 0m5.437s 00:18:14.938 user 0m0.102s 00:18:14.938 sys 0m0.032s 00:18:14.938 09:44:41 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:14.938 09:44:41 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x 00:18:14.938 ************************************ 00:18:14.938 END TEST bdev_qos_bw 00:18:14.938 ************************************ 00:18:14.938 09:44:41 blockdev_general.bdev_qos -- 
common/autotest_common.sh@1142 -- # return 0 00:18:14.938 09:44:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:18:14.938 09:44:41 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.938 09:44:41 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:18:14.938 09:44:41 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.938 09:44:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@437 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:18:14.938 09:44:41 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:14.938 09:44:41 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:14.938 09:44:41 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:18:14.938 ************************************ 00:18:14.938 START TEST bdev_qos_ro_bw 00:18:14.938 ************************************ 00:18:14.938 09:44:41 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1123 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:18:14.938 09:44:41 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_limit=2 00:18:14.938 09:44:41 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:18:14.938 09:44:41 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Malloc_0 00:18:14.938 09:44:41 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:18:14.938 09:44:41 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:18:14.938 09:44:41 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:18:14.938 09:44:41 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:18:14.938 09:44:41 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:18:14.938 09:44:41 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # tail -1 00:18:20.228 09:44:47 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 512.22 2048.87 0.00 0.00 2164.00 0.00 0.00 ' 00:18:20.228 09:44:47 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:18:20.228 09:44:47 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:18:20.228 09:44:47 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:18:20.228 09:44:47 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # iostat_result=2164.00 00:18:20.228 09:44:47 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@385 -- # echo 2164 00:18:20.228 09:44:47 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # qos_result=2164 00:18:20.228 09:44:47 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:18:20.228 09:44:47 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@394 -- # qos_limit=2048 00:18:20.228 09:44:47 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # lower_limit=1843 00:18:20.228 09:44:47 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@397 -- # upper_limit=2252 00:18:20.228 09:44:47 
blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2164 -lt 1843 ']' 00:18:20.228 09:44:47 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2164 -gt 2252 ']' 00:18:20.228 00:18:20.228 real 0m5.466s 00:18:20.228 user 0m0.108s 00:18:20.228 sys 0m0.025s 00:18:20.228 09:44:47 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:20.228 09:44:47 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x 00:18:20.228 ************************************ 00:18:20.228 END TEST bdev_qos_ro_bw 00:18:20.228 ************************************ 00:18:20.228 09:44:47 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:18:20.228 09:44:47 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:18:20.228 09:44:47 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.228 09:44:47 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:18:20.228 09:44:47 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.228 09:44:47 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # rpc_cmd bdev_null_delete Null_1 00:18:20.228 09:44:47 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.228 09:44:47 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:18:20.228 00:18:20.228 Latency(us) 00:18:20.228 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.228 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:18:20.228 Malloc_0 : 27.97 222908.31 870.74 0.00 0.00 1137.78 343.23 501135.68 00:18:20.228 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:18:20.228 Null_1 : 28.02 446333.83 1743.49 0.00 0.00 573.36 56.25 36155.76 00:18:20.228 =================================================================================================================== 00:18:20.228 Total : 669242.14 2614.23 0.00 0.00 761.17 56.25 501135.68 00:18:20.228 0 00:18:20.228 09:44:47 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.228 09:44:47 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # killprocess 48176 00:18:20.228 09:44:47 blockdev_general.bdev_qos -- common/autotest_common.sh@948 -- # '[' -z 48176 ']' 00:18:20.228 09:44:47 blockdev_general.bdev_qos -- common/autotest_common.sh@952 -- # kill -0 48176 00:18:20.228 09:44:47 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # uname 00:18:20.228 09:44:47 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:18:20.228 09:44:47 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # ps -c -o command 48176 00:18:20.228 09:44:47 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # tail -1 00:18:20.228 09:44:47 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:18:20.228 09:44:47 blockdev_general.bdev_qos -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:18:20.228 killing process with pid 48176 00:18:20.228 09:44:47 blockdev_general.bdev_qos -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48176' 00:18:20.228 09:44:47 blockdev_general.bdev_qos -- common/autotest_common.sh@967 -- # kill 48176 00:18:20.228 Received shutdown signal, test time was about 28.035058 seconds 00:18:20.228 00:18:20.228 Latency(us) 00:18:20.228 
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.228 =================================================================================================================== 00:18:20.228 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:20.228 09:44:47 blockdev_general.bdev_qos -- common/autotest_common.sh@972 -- # wait 48176 00:18:20.228 09:44:48 blockdev_general.bdev_qos -- bdev/blockdev.sh@462 -- # trap - SIGINT SIGTERM EXIT 00:18:20.228 00:18:20.228 real 0m29.380s 00:18:20.228 user 0m29.665s 00:18:20.228 sys 0m1.097s 00:18:20.228 09:44:48 blockdev_general.bdev_qos -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:20.228 09:44:48 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:18:20.228 ************************************ 00:18:20.228 END TEST bdev_qos 00:18:20.228 ************************************ 00:18:20.228 09:44:48 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:18:20.228 09:44:48 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:18:20.228 09:44:48 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:20.228 09:44:48 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:20.228 09:44:48 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:18:20.228 ************************************ 00:18:20.228 START TEST bdev_qd_sampling 00:18:20.228 ************************************ 00:18:20.228 09:44:48 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1123 -- # qd_sampling_test_suite '' 00:18:20.228 09:44:48 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@538 -- # QD_DEV=Malloc_QD 00:18:20.228 09:44:48 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # QD_PID=48397 00:18:20.228 Process bdev QD sampling period testing pid: 48397 00:18:20.228 09:44:48 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # echo 'Process bdev QD sampling period testing pid: 48397' 00:18:20.228 09:44:48 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:18:20.228 09:44:48 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:18:20.228 09:44:48 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@544 -- # waitforlisten 48397 00:18:20.228 09:44:48 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@829 -- # '[' -z 48397 ']' 00:18:20.228 09:44:48 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.228 09:44:48 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:20.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:20.228 09:44:48 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.228 09:44:48 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:20.228 09:44:48 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:18:20.228 [2024-07-15 09:44:48.248335] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
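The QD sampling suite spinning up here follows the same parked-bdevperf (-z) pattern, then sets a queue-depth sampling period on Malloc_QD and reads the stats back. The rpc_cmd calls traced further below reduce to these two invocations of the standard scripts/rpc.py (socket defaults to /var/tmp/spdk.sock):

# enable queue-depth sampling on Malloc_QD (period argument as the test passes it),
# then dump the bdev's iostat JSON, which includes the sampled queue_depth fields
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_qd_sampling_period Malloc_QD 10
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_iostat -b Malloc_QD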
00:18:20.228 [2024-07-15 09:44:48.248571] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:21.164 EAL: TSC is not safe to use in SMP mode 00:18:21.164 EAL: TSC is not invariant 00:18:21.164 [2024-07-15 09:44:48.951208] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:21.164 [2024-07-15 09:44:49.073810] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:21.164 [2024-07-15 09:44:49.073878] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:18:21.164 [2024-07-15 09:44:49.077344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.164 [2024-07-15 09:44:49.077334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:21.164 09:44:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:21.164 09:44:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@862 -- # return 0 00:18:21.164 09:44:49 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:18:21.164 09:44:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.164 09:44:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:18:21.164 Malloc_QD 00:18:21.164 09:44:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.164 09:44:49 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@547 -- # waitforbdev Malloc_QD 00:18:21.164 09:44:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_QD 00:18:21.164 09:44:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:21.164 09:44:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@899 -- # local i 00:18:21.164 09:44:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:21.164 09:44:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:21.164 09:44:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:18:21.164 09:44:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.164 09:44:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:18:21.164 09:44:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.164 09:44:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:18:21.164 09:44:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.164 09:44:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:18:21.165 [ 00:18:21.165 { 00:18:21.165 "name": "Malloc_QD", 00:18:21.165 "aliases": [ 00:18:21.165 "e08e8dde-428e-11ef-a0af-c98d8ee52a94" 00:18:21.165 ], 00:18:21.165 "product_name": "Malloc disk", 00:18:21.165 "block_size": 512, 00:18:21.165 "num_blocks": 262144, 00:18:21.165 "uuid": "e08e8dde-428e-11ef-a0af-c98d8ee52a94", 00:18:21.165 "assigned_rate_limits": { 00:18:21.165 "rw_ios_per_sec": 0, 00:18:21.165 "rw_mbytes_per_sec": 0, 00:18:21.165 "r_mbytes_per_sec": 0, 00:18:21.165 "w_mbytes_per_sec": 0 00:18:21.165 }, 00:18:21.165 "claimed": false, 
00:18:21.165 "zoned": false, 00:18:21.165 "supported_io_types": { 00:18:21.165 "read": true, 00:18:21.165 "write": true, 00:18:21.165 "unmap": true, 00:18:21.165 "flush": true, 00:18:21.165 "reset": true, 00:18:21.165 "nvme_admin": false, 00:18:21.165 "nvme_io": false, 00:18:21.165 "nvme_io_md": false, 00:18:21.165 "write_zeroes": true, 00:18:21.165 "zcopy": true, 00:18:21.165 "get_zone_info": false, 00:18:21.165 "zone_management": false, 00:18:21.165 "zone_append": false, 00:18:21.165 "compare": false, 00:18:21.165 "compare_and_write": false, 00:18:21.165 "abort": true, 00:18:21.165 "seek_hole": false, 00:18:21.165 "seek_data": false, 00:18:21.165 "copy": true, 00:18:21.165 "nvme_iov_md": false 00:18:21.165 }, 00:18:21.165 "memory_domains": [ 00:18:21.165 { 00:18:21.165 "dma_device_id": "system", 00:18:21.165 "dma_device_type": 1 00:18:21.165 }, 00:18:21.165 { 00:18:21.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:21.165 "dma_device_type": 2 00:18:21.165 } 00:18:21.165 ], 00:18:21.165 "driver_specific": {} 00:18:21.165 } 00:18:21.165 ] 00:18:21.165 09:44:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.165 09:44:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@905 -- # return 0 00:18:21.165 09:44:49 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # sleep 2 00:18:21.165 09:44:49 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:21.424 Running I/O for 5 seconds... 00:18:23.382 09:44:51 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@551 -- # qd_sampling_function_test Malloc_QD 00:18:23.382 09:44:51 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local bdev_name=Malloc_QD 00:18:23.382 09:44:51 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local sampling_period=10 00:18:23.382 09:44:51 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@521 -- # local iostats 00:18:23.382 09:44:51 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:18:23.382 09:44:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.382 09:44:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:18:23.382 09:44:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.382 09:44:51 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:18:23.382 09:44:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.382 09:44:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:18:23.382 09:44:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.382 09:44:51 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # iostats='{ 00:18:23.382 "tick_rate": 2494140116, 00:18:23.382 "ticks": 893405975604, 00:18:23.382 "bdevs": [ 00:18:23.382 { 00:18:23.382 "name": "Malloc_QD", 00:18:23.382 "bytes_read": 12896473600, 00:18:23.382 "num_read_ops": 3148547, 00:18:23.382 "bytes_written": 0, 00:18:23.382 "num_write_ops": 0, 00:18:23.382 "bytes_unmapped": 0, 00:18:23.382 "num_unmap_ops": 0, 00:18:23.382 "bytes_copied": 0, 00:18:23.382 "num_copy_ops": 0, 00:18:23.382 "read_latency_ticks": 2482691954218, 00:18:23.382 "max_read_latency_ticks": 1719216, 00:18:23.382 "min_read_latency_ticks": 
49648, 00:18:23.382 "write_latency_ticks": 0, 00:18:23.382 "max_write_latency_ticks": 0, 00:18:23.382 "min_write_latency_ticks": 0, 00:18:23.382 "unmap_latency_ticks": 0, 00:18:23.382 "max_unmap_latency_ticks": 0, 00:18:23.382 "min_unmap_latency_ticks": 0, 00:18:23.382 "copy_latency_ticks": 0, 00:18:23.382 "max_copy_latency_ticks": 0, 00:18:23.382 "min_copy_latency_ticks": 0, 00:18:23.382 "io_error": {}, 00:18:23.382 "queue_depth_polling_period": 10, 00:18:23.382 "queue_depth": 512, 00:18:23.382 "io_time": 380, 00:18:23.382 "weighted_io_time": 194560 00:18:23.382 } 00:18:23.382 ] 00:18:23.382 }' 00:18:23.382 09:44:51 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:18:23.382 09:44:51 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # qd_sampling_period=10 00:18:23.382 09:44:51 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 == null ']' 00:18:23.382 09:44:51 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 -ne 10 ']' 00:18:23.382 09:44:51 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:18:23.382 09:44:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.382 09:44:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:18:23.382 00:18:23.382 Latency(us) 00:18:23.382 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.382 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:18:23.382 Malloc_QD : 1.98 815307.67 3184.80 0.00 0.00 313.75 52.35 610.92 00:18:23.383 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:18:23.383 Malloc_QD : 1.98 801225.19 3129.79 0.00 0.00 319.24 60.35 689.74 00:18:23.383 =================================================================================================================== 00:18:23.383 Total : 1616532.86 6314.58 0.00 0.00 316.47 52.35 689.74 00:18:23.383 0 00:18:23.383 09:44:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.383 09:44:51 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # killprocess 48397 00:18:23.383 09:44:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@948 -- # '[' -z 48397 ']' 00:18:23.383 09:44:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@952 -- # kill -0 48397 00:18:23.383 09:44:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # uname 00:18:23.383 09:44:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:18:23.383 09:44:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # ps -c -o command 48397 00:18:23.383 09:44:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # tail -1 00:18:23.383 09:44:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:18:23.383 09:44:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:18:23.383 killing process with pid 48397 00:18:23.383 09:44:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48397' 00:18:23.383 09:44:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@967 -- # kill 48397 00:18:23.383 Received shutdown signal, test time was about 2.024943 seconds 00:18:23.383 00:18:23.383 Latency(us) 
00:18:23.383 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.383 =================================================================================================================== 00:18:23.383 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:23.383 09:44:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@972 -- # wait 48397 00:18:23.642 09:44:51 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@555 -- # trap - SIGINT SIGTERM EXIT 00:18:23.642 00:18:23.642 real 0m3.392s 00:18:23.642 user 0m5.524s 00:18:23.642 sys 0m0.830s 00:18:23.642 09:44:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:23.642 09:44:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:18:23.642 ************************************ 00:18:23.642 END TEST bdev_qd_sampling 00:18:23.642 ************************************ 00:18:23.642 09:44:51 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:18:23.642 09:44:51 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_error error_test_suite '' 00:18:23.642 09:44:51 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:23.642 09:44:51 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:23.642 09:44:51 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:18:23.642 ************************************ 00:18:23.642 START TEST bdev_error 00:18:23.642 ************************************ 00:18:23.642 09:44:51 blockdev_general.bdev_error -- common/autotest_common.sh@1123 -- # error_test_suite '' 00:18:23.642 09:44:51 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_1=Dev_1 00:18:23.642 09:44:51 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # DEV_2=Dev_2 00:18:23.642 09:44:51 blockdev_general.bdev_error -- bdev/blockdev.sh@468 -- # ERR_DEV=EE_Dev_1 00:18:23.642 09:44:51 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:18:23.642 09:44:51 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # ERR_PID=48444 00:18:23.642 Process error testing pid: 48444 00:18:23.642 09:44:51 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # echo 'Process error testing pid: 48444' 00:18:23.642 09:44:51 blockdev_general.bdev_error -- bdev/blockdev.sh@474 -- # waitforlisten 48444 00:18:23.642 09:44:51 blockdev_general.bdev_error -- common/autotest_common.sh@829 -- # '[' -z 48444 ']' 00:18:23.642 09:44:51 blockdev_general.bdev_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.642 09:44:51 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:23.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:23.642 09:44:51 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.642 09:44:51 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:23.642 09:44:51 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:18:23.642 [2024-07-15 09:44:51.678339] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
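
Before the bdev_error run starting here, the bdev_qd_sampling suite above validated queue-depth sampling end to end: it enabled sampling on the bdev, let bdevperf drive random reads, and then read the polling period back out of the iostat JSON. The exact RPC pair it used, reproduced from the trace above (runnable against any live SPDK app that has a Malloc_QD bdev; the suite only checks that the value 10 round-trips, not its units):

  # enable queue-depth sampling with a period of 10
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_qd_sampling_period Malloc_QD 10
  # read it back; the suite fails unless this echoes 10
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_iostat -b Malloc_QD \
    | jq -r '.bdevs[0].queue_depth_polling_period'
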
00:18:23.642 [2024-07-15 09:44:51.678529] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:24.579 EAL: TSC is not safe to use in SMP mode 00:18:24.579 EAL: TSC is not invariant 00:18:24.579 [2024-07-15 09:44:52.408071] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.579 [2024-07-15 09:44:52.520886] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:18:24.579 [2024-07-15 09:44:52.523264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:24.838 09:44:52 blockdev_general.bdev_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:24.838 09:44:52 blockdev_general.bdev_error -- common/autotest_common.sh@862 -- # return 0 00:18:24.838 09:44:52 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:18:24.838 09:44:52 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.838 09:44:52 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:18:24.838 Dev_1 00:18:24.838 09:44:52 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.838 09:44:52 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # waitforbdev Dev_1 00:18:24.838 09:44:52 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:18:24.838 09:44:52 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:24.838 09:44:52 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:18:24.838 09:44:52 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:24.838 09:44:52 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:24.838 09:44:52 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:18:24.838 09:44:52 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.838 09:44:52 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:18:24.838 09:44:52 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.838 09:44:52 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:18:24.838 09:44:52 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.838 09:44:52 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:18:24.838 [ 00:18:24.838 { 00:18:24.838 "name": "Dev_1", 00:18:24.838 "aliases": [ 00:18:24.838 "e2a4f947-428e-11ef-a0af-c98d8ee52a94" 00:18:24.838 ], 00:18:24.838 "product_name": "Malloc disk", 00:18:24.838 "block_size": 512, 00:18:24.838 "num_blocks": 262144, 00:18:24.838 "uuid": "e2a4f947-428e-11ef-a0af-c98d8ee52a94", 00:18:24.838 "assigned_rate_limits": { 00:18:24.838 "rw_ios_per_sec": 0, 00:18:24.838 "rw_mbytes_per_sec": 0, 00:18:24.838 "r_mbytes_per_sec": 0, 00:18:24.838 "w_mbytes_per_sec": 0 00:18:24.838 }, 00:18:24.838 "claimed": false, 00:18:24.838 "zoned": false, 00:18:24.838 "supported_io_types": { 00:18:24.838 "read": true, 00:18:24.838 "write": true, 00:18:24.838 "unmap": true, 00:18:24.838 "flush": true, 00:18:24.838 "reset": true, 00:18:24.838 "nvme_admin": false, 00:18:24.838 "nvme_io": false, 00:18:24.838 "nvme_io_md": false, 00:18:24.838 "write_zeroes": true, 00:18:24.838 "zcopy": true, 
00:18:24.838 "get_zone_info": false, 00:18:24.838 "zone_management": false, 00:18:24.838 "zone_append": false, 00:18:24.838 "compare": false, 00:18:24.838 "compare_and_write": false, 00:18:24.838 "abort": true, 00:18:24.838 "seek_hole": false, 00:18:24.838 "seek_data": false, 00:18:24.838 "copy": true, 00:18:24.838 "nvme_iov_md": false 00:18:24.838 }, 00:18:24.838 "memory_domains": [ 00:18:24.838 { 00:18:24.838 "dma_device_id": "system", 00:18:24.838 "dma_device_type": 1 00:18:24.838 }, 00:18:24.838 { 00:18:24.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:24.838 "dma_device_type": 2 00:18:24.838 } 00:18:24.838 ], 00:18:24.838 "driver_specific": {} 00:18:24.838 } 00:18:24.838 ] 00:18:24.838 09:44:52 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.838 09:44:52 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:18:24.838 09:44:52 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_error_create Dev_1 00:18:24.838 09:44:52 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.838 09:44:52 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:18:24.838 true 00:18:24.838 09:44:52 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.838 09:44:52 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:18:24.838 09:44:52 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.838 09:44:52 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:18:24.838 Dev_2 00:18:24.838 09:44:52 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.838 09:44:52 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # waitforbdev Dev_2 00:18:24.838 09:44:52 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:18:24.838 09:44:52 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:24.838 09:44:52 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:18:24.838 09:44:52 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:24.838 09:44:52 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:24.838 09:44:52 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:18:24.838 09:44:52 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.838 09:44:52 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:18:24.838 09:44:52 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.838 09:44:52 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:18:24.838 09:44:52 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.838 09:44:52 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:18:24.838 [ 00:18:24.838 { 00:18:24.838 "name": "Dev_2", 00:18:24.838 "aliases": [ 00:18:24.838 "e2ab1359-428e-11ef-a0af-c98d8ee52a94" 00:18:24.838 ], 00:18:24.838 "product_name": "Malloc disk", 00:18:24.838 "block_size": 512, 00:18:24.838 "num_blocks": 262144, 00:18:24.838 "uuid": "e2ab1359-428e-11ef-a0af-c98d8ee52a94", 00:18:24.838 "assigned_rate_limits": { 00:18:24.838 "rw_ios_per_sec": 0, 00:18:24.838 "rw_mbytes_per_sec": 0, 
00:18:24.838 "r_mbytes_per_sec": 0, 00:18:24.838 "w_mbytes_per_sec": 0 00:18:24.838 }, 00:18:24.838 "claimed": false, 00:18:24.838 "zoned": false, 00:18:24.838 "supported_io_types": { 00:18:24.838 "read": true, 00:18:24.838 "write": true, 00:18:24.838 "unmap": true, 00:18:24.838 "flush": true, 00:18:24.838 "reset": true, 00:18:24.838 "nvme_admin": false, 00:18:24.838 "nvme_io": false, 00:18:24.838 "nvme_io_md": false, 00:18:24.838 "write_zeroes": true, 00:18:24.839 "zcopy": true, 00:18:24.839 "get_zone_info": false, 00:18:24.839 "zone_management": false, 00:18:24.839 "zone_append": false, 00:18:24.839 "compare": false, 00:18:24.839 "compare_and_write": false, 00:18:24.839 "abort": true, 00:18:24.839 "seek_hole": false, 00:18:24.839 "seek_data": false, 00:18:24.839 "copy": true, 00:18:24.839 "nvme_iov_md": false 00:18:24.839 }, 00:18:24.839 "memory_domains": [ 00:18:24.839 { 00:18:24.839 "dma_device_id": "system", 00:18:24.839 "dma_device_type": 1 00:18:24.839 }, 00:18:24.839 { 00:18:24.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:24.839 "dma_device_type": 2 00:18:24.839 } 00:18:24.839 ], 00:18:24.839 "driver_specific": {} 00:18:24.839 } 00:18:24.839 ] 00:18:24.839 09:44:52 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.839 09:44:52 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:18:24.839 09:44:52 blockdev_general.bdev_error -- bdev/blockdev.sh@481 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:18:24.839 09:44:52 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.839 09:44:52 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:18:24.839 09:44:52 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.839 09:44:52 blockdev_general.bdev_error -- bdev/blockdev.sh@484 -- # sleep 1 00:18:24.839 09:44:52 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:18:24.839 Running I/O for 5 seconds... 00:18:26.215 09:44:53 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # kill -0 48444 00:18:26.215 Process is existed as continue on error is set. Pid: 48444 00:18:26.215 09:44:53 blockdev_general.bdev_error -- bdev/blockdev.sh@488 -- # echo 'Process is existed as continue on error is set. 
Pid: 48444' 00:18:26.215 09:44:53 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:18:26.215 09:44:53 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.215 09:44:53 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:18:26.215 09:44:53 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.215 09:44:53 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # rpc_cmd bdev_malloc_delete Dev_1 00:18:26.215 09:44:53 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.215 09:44:53 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:18:26.215 09:44:53 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.215 09:44:53 blockdev_general.bdev_error -- bdev/blockdev.sh@497 -- # sleep 5 00:18:26.215 Timeout while waiting for response: 00:18:26.215 00:18:26.215 00:18:30.429 00:18:30.429 Latency(us) 00:18:30.429 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.429 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:18:30.429 EE_Dev_1 : 0.98 370038.37 1445.46 5.10 0.00 43.05 22.58 293.96 00:18:30.430 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:18:30.430 Dev_2 : 5.00 746054.14 2914.27 0.00 0.00 21.26 6.36 32582.23 00:18:30.430 =================================================================================================================== 00:18:30.430 Total : 1116092.51 4359.74 5.10 0.00 23.19 6.36 32582.23 00:18:30.997 09:44:59 blockdev_general.bdev_error -- bdev/blockdev.sh@499 -- # killprocess 48444 00:18:30.997 09:44:59 blockdev_general.bdev_error -- common/autotest_common.sh@948 -- # '[' -z 48444 ']' 00:18:30.997 09:44:59 blockdev_general.bdev_error -- common/autotest_common.sh@952 -- # kill -0 48444 00:18:30.997 09:44:59 blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # uname 00:18:30.997 09:44:59 blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:18:30.997 09:44:59 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # ps -c -o command 48444 00:18:30.997 09:44:59 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # tail -1 00:18:30.997 09:44:59 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:18:30.997 09:44:59 blockdev_general.bdev_error -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:18:30.997 killing process with pid 48444 00:18:30.997 09:44:59 blockdev_general.bdev_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48444' 00:18:30.997 Received shutdown signal, test time was about 5.000000 seconds 00:18:30.997 00:18:30.997 Latency(us) 00:18:30.997 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.997 =================================================================================================================== 00:18:30.997 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:30.997 09:44:59 blockdev_general.bdev_error -- common/autotest_common.sh@967 -- # kill 48444 00:18:30.997 09:44:59 blockdev_general.bdev_error -- common/autotest_common.sh@972 -- # wait 48444 00:18:31.255 09:44:59 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:18:31.255 Process error testing pid: 48484 
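
A second error-injection pass is launching here (pid 48484) with the same shape as the one that just completed. The core of the technique, reproduced from the trace above: wrap a base bdev in an error bdev, arm it to fail a fixed number of I/Os, run traffic, then tear down in reverse order. Runnable against a live SPDK app that already has a Dev_1 malloc bdev:

  # wrap Dev_1; the error device is exposed as EE_Dev_1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_error_create Dev_1
  # fail the next 5 I/Os of any type submitted through EE_Dev_1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_error_inject_error EE_Dev_1 all failure -n 5
  # ... run I/O (bdevperf.py perform_tests) ...
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_error_delete EE_Dev_1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Dev_1

The first pass survived the injected failures because its bdevperf was started with -f (continue on error), hence the "Process is existed as continue on error is set" message above; the launch just traced omits -f, so this pass is expected to abort once the errors fire.
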
00:18:31.255 09:44:59 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # ERR_PID=48484 00:18:31.255 09:44:59 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # echo 'Process error testing pid: 48484' 00:18:31.255 09:44:59 blockdev_general.bdev_error -- bdev/blockdev.sh@505 -- # waitforlisten 48484 00:18:31.255 09:44:59 blockdev_general.bdev_error -- common/autotest_common.sh@829 -- # '[' -z 48484 ']' 00:18:31.255 09:44:59 blockdev_general.bdev_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.255 09:44:59 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:31.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.255 09:44:59 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.255 09:44:59 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:31.255 09:44:59 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:18:31.255 [2024-07-15 09:44:59.306113] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:18:31.255 [2024-07-15 09:44:59.306288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:32.194 EAL: TSC is not safe to use in SMP mode 00:18:32.194 EAL: TSC is not invariant 00:18:32.194 [2024-07-15 09:45:00.025025] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.194 [2024-07-15 09:45:00.135240] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:18:32.194 [2024-07-15 09:45:00.137727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.453 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:32.453 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@862 -- # return 0 00:18:32.453 09:45:00 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:18:32.453 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.453 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:18:32.453 Dev_1 00:18:32.453 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.453 09:45:00 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # waitforbdev Dev_1 00:18:32.453 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:18:32.453 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:32.453 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:18:32.453 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:32.453 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:32.454 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:18:32.454 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.454 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:18:32.454 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
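
The waitforbdev helper being traced here is the suite's readiness barrier: after creating a bdev it first drains any pending module examine callbacks, then looks the bdev up by name with a timeout. Its two RPCs, reproduced from the trace (the 2000 ms value is the helper's default, set when no explicit timeout is passed, as the [[ -z '' ]] check above shows):

  # block until bdev modules have finished examining newly added devices
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
  # look the bdev up, waiting up to 2000 ms for it to appear
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Dev_1 -t 2000
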
00:18:32.454 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:18:32.454 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.454 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:18:32.454 [ 00:18:32.454 { 00:18:32.454 "name": "Dev_1", 00:18:32.454 "aliases": [ 00:18:32.454 "e72a1c97-428e-11ef-a0af-c98d8ee52a94" 00:18:32.454 ], 00:18:32.454 "product_name": "Malloc disk", 00:18:32.454 "block_size": 512, 00:18:32.454 "num_blocks": 262144, 00:18:32.454 "uuid": "e72a1c97-428e-11ef-a0af-c98d8ee52a94", 00:18:32.454 "assigned_rate_limits": { 00:18:32.454 "rw_ios_per_sec": 0, 00:18:32.454 "rw_mbytes_per_sec": 0, 00:18:32.454 "r_mbytes_per_sec": 0, 00:18:32.454 "w_mbytes_per_sec": 0 00:18:32.454 }, 00:18:32.454 "claimed": false, 00:18:32.454 "zoned": false, 00:18:32.454 "supported_io_types": { 00:18:32.454 "read": true, 00:18:32.454 "write": true, 00:18:32.454 "unmap": true, 00:18:32.454 "flush": true, 00:18:32.454 "reset": true, 00:18:32.454 "nvme_admin": false, 00:18:32.454 "nvme_io": false, 00:18:32.454 "nvme_io_md": false, 00:18:32.454 "write_zeroes": true, 00:18:32.454 "zcopy": true, 00:18:32.454 "get_zone_info": false, 00:18:32.454 "zone_management": false, 00:18:32.454 "zone_append": false, 00:18:32.454 "compare": false, 00:18:32.454 "compare_and_write": false, 00:18:32.454 "abort": true, 00:18:32.454 "seek_hole": false, 00:18:32.454 "seek_data": false, 00:18:32.454 "copy": true, 00:18:32.454 "nvme_iov_md": false 00:18:32.454 }, 00:18:32.454 "memory_domains": [ 00:18:32.454 { 00:18:32.454 "dma_device_id": "system", 00:18:32.454 "dma_device_type": 1 00:18:32.454 }, 00:18:32.454 { 00:18:32.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.454 "dma_device_type": 2 00:18:32.454 } 00:18:32.454 ], 00:18:32.454 "driver_specific": {} 00:18:32.454 } 00:18:32.454 ] 00:18:32.454 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.454 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:18:32.454 09:45:00 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_error_create Dev_1 00:18:32.454 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.454 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:18:32.454 true 00:18:32.454 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.454 09:45:00 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:18:32.454 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.454 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:18:32.454 Dev_2 00:18:32.454 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.454 09:45:00 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # waitforbdev Dev_2 00:18:32.454 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:18:32.454 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:32.454 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:18:32.454 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:32.454 09:45:00 
blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:32.454 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:18:32.454 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.454 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:18:32.454 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.454 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:18:32.454 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.454 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:18:32.454 [ 00:18:32.454 { 00:18:32.454 "name": "Dev_2", 00:18:32.454 "aliases": [ 00:18:32.454 "e732a7fc-428e-11ef-a0af-c98d8ee52a94" 00:18:32.454 ], 00:18:32.454 "product_name": "Malloc disk", 00:18:32.454 "block_size": 512, 00:18:32.454 "num_blocks": 262144, 00:18:32.454 "uuid": "e732a7fc-428e-11ef-a0af-c98d8ee52a94", 00:18:32.454 "assigned_rate_limits": { 00:18:32.454 "rw_ios_per_sec": 0, 00:18:32.454 "rw_mbytes_per_sec": 0, 00:18:32.454 "r_mbytes_per_sec": 0, 00:18:32.454 "w_mbytes_per_sec": 0 00:18:32.454 }, 00:18:32.454 "claimed": false, 00:18:32.454 "zoned": false, 00:18:32.454 "supported_io_types": { 00:18:32.454 "read": true, 00:18:32.454 "write": true, 00:18:32.454 "unmap": true, 00:18:32.454 "flush": true, 00:18:32.454 "reset": true, 00:18:32.454 "nvme_admin": false, 00:18:32.454 "nvme_io": false, 00:18:32.454 "nvme_io_md": false, 00:18:32.454 "write_zeroes": true, 00:18:32.454 "zcopy": true, 00:18:32.454 "get_zone_info": false, 00:18:32.454 "zone_management": false, 00:18:32.454 "zone_append": false, 00:18:32.454 "compare": false, 00:18:32.454 "compare_and_write": false, 00:18:32.454 "abort": true, 00:18:32.454 "seek_hole": false, 00:18:32.454 "seek_data": false, 00:18:32.454 "copy": true, 00:18:32.454 "nvme_iov_md": false 00:18:32.454 }, 00:18:32.454 "memory_domains": [ 00:18:32.454 { 00:18:32.454 "dma_device_id": "system", 00:18:32.454 "dma_device_type": 1 00:18:32.454 }, 00:18:32.454 { 00:18:32.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.454 "dma_device_type": 2 00:18:32.454 } 00:18:32.454 ], 00:18:32.454 "driver_specific": {} 00:18:32.454 } 00:18:32.454 ] 00:18:32.454 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.454 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:18:32.454 09:45:00 blockdev_general.bdev_error -- bdev/blockdev.sh@512 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:18:32.454 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.454 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:18:32.454 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.454 09:45:00 blockdev_general.bdev_error -- bdev/blockdev.sh@515 -- # NOT wait 48484 00:18:32.454 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@648 -- # local es=0 00:18:32.454 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@650 -- # valid_exec_arg wait 48484 00:18:32.454 09:45:00 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:18:32.454 
09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@636 -- # local arg=wait 00:18:32.454 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:32.454 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # type -t wait 00:18:32.454 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:32.454 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # wait 48484 00:18:32.454 Running I/O for 5 seconds... 00:18:32.454 task offset: 120400 on job bdev=EE_Dev_1 fails 00:18:32.454 00:18:32.454 Latency(us) 00:18:32.454 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.454 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:18:32.454 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:18:32.454 EE_Dev_1 : 0.00 160583.94 627.28 36496.35 0.00 67.13 22.58 127.27 00:18:32.454 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:18:32.454 Dev_2 : 0.00 200000.00 781.25 0.00 0.00 35.05 23.30 46.80 00:18:32.454 =================================================================================================================== 00:18:32.454 Total : 360583.94 1408.53 36496.35 0.00 49.73 22.58 127.27 00:18:32.454 [2024-07-15 09:45:00.488313] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:32.454 request: 00:18:32.454 { 00:18:32.454 "method": "perform_tests", 00:18:32.454 "req_id": 1 00:18:32.454 } 00:18:32.454 Got JSON-RPC error response 00:18:32.454 response: 00:18:32.455 { 00:18:32.455 "code": -32603, 00:18:32.455 "message": "bdevperf failed with error Operation not permitted" 00:18:32.455 } 00:18:33.023 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # es=255 00:18:33.023 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:33.023 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@660 -- # es=127 00:18:33.023 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@661 -- # case "$es" in 00:18:33.023 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@668 -- # es=1 00:18:33.023 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:33.023 00:18:33.023 real 0m9.156s 00:18:33.023 user 0m8.670s 00:18:33.023 sys 0m1.722s 00:18:33.023 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:33.023 ************************************ 00:18:33.023 END TEST bdev_error 00:18:33.023 ************************************ 00:18:33.023 09:45:00 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:18:33.023 09:45:00 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:18:33.023 09:45:00 blockdev_general -- bdev/blockdev.sh@791 -- # run_test bdev_stat stat_test_suite '' 00:18:33.023 09:45:00 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:33.023 09:45:00 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:33.023 09:45:00 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:18:33.023 ************************************ 00:18:33.023 START TEST bdev_stat 00:18:33.023 ************************************ 00:18:33.023 09:45:00 blockdev_general.bdev_stat -- common/autotest_common.sh@1123 -- # stat_test_suite '' 00:18:33.023 09:45:00 blockdev_general.bdev_stat -- bdev/blockdev.sh@592 
-- # STAT_DEV=Malloc_STAT 00:18:33.023 09:45:00 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # STAT_PID=48511 00:18:33.023 Process Bdev IO statistics testing pid: 48511 00:18:33.023 09:45:00 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # echo 'Process Bdev IO statistics testing pid: 48511' 00:18:33.023 09:45:00 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:18:33.023 09:45:00 blockdev_general.bdev_stat -- bdev/blockdev.sh@599 -- # waitforlisten 48511 00:18:33.023 09:45:00 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:18:33.023 09:45:00 blockdev_general.bdev_stat -- common/autotest_common.sh@829 -- # '[' -z 48511 ']' 00:18:33.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.023 09:45:00 blockdev_general.bdev_stat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.023 09:45:00 blockdev_general.bdev_stat -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:33.023 09:45:00 blockdev_general.bdev_stat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.023 09:45:00 blockdev_general.bdev_stat -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:33.023 09:45:00 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:18:33.023 [2024-07-15 09:45:00.890792] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:18:33.023 [2024-07-15 09:45:00.891133] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:33.592 EAL: TSC is not safe to use in SMP mode 00:18:33.592 EAL: TSC is not invariant 00:18:33.592 [2024-07-15 09:45:01.616421] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:33.852 [2024-07-15 09:45:01.733557] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:33.852 [2024-07-15 09:45:01.733625] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
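
The bdev_stat suite launching here follows the same driver pattern as the earlier suites: bdevperf starts idle and the workload is kicked off over RPC once the bdev under test exists. Both commands appear verbatim in this trace; the backgrounding ampersand below is illustrative (the harness itself backgrounds the process and then polls with waitforlisten):

  # start bdevperf idle (-z): two cores, queue depth 256, 4 KiB random reads, 10 s
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' &
  # after configuring Malloc_STAT over RPC, trigger the run
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
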
00:18:33.852 [2024-07-15 09:45:01.823791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.852 [2024-07-15 09:45:01.823719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.791 09:45:02 blockdev_general.bdev_stat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:34.791 09:45:02 blockdev_general.bdev_stat -- common/autotest_common.sh@862 -- # return 0 00:18:34.791 09:45:02 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:18:34.791 09:45:02 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.791 09:45:02 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:18:34.791 Malloc_STAT 00:18:34.791 09:45:02 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.791 09:45:02 blockdev_general.bdev_stat -- bdev/blockdev.sh@602 -- # waitforbdev Malloc_STAT 00:18:34.791 09:45:02 blockdev_general.bdev_stat -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_STAT 00:18:34.791 09:45:02 blockdev_general.bdev_stat -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:34.791 09:45:02 blockdev_general.bdev_stat -- common/autotest_common.sh@899 -- # local i 00:18:34.791 09:45:02 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:34.791 09:45:02 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:34.791 09:45:02 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:18:34.791 09:45:02 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.791 09:45:02 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:18:34.791 09:45:02 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.791 09:45:02 blockdev_general.bdev_stat -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:18:34.791 09:45:02 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.791 09:45:02 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:18:34.791 [ 00:18:34.791 { 00:18:34.791 "name": "Malloc_STAT", 00:18:34.791 "aliases": [ 00:18:34.791 "e886bfc0-428e-11ef-a0af-c98d8ee52a94" 00:18:34.791 ], 00:18:34.791 "product_name": "Malloc disk", 00:18:34.791 "block_size": 512, 00:18:34.791 "num_blocks": 262144, 00:18:34.791 "uuid": "e886bfc0-428e-11ef-a0af-c98d8ee52a94", 00:18:34.791 "assigned_rate_limits": { 00:18:34.791 "rw_ios_per_sec": 0, 00:18:34.791 "rw_mbytes_per_sec": 0, 00:18:34.792 "r_mbytes_per_sec": 0, 00:18:34.792 "w_mbytes_per_sec": 0 00:18:34.792 }, 00:18:34.792 "claimed": false, 00:18:34.792 "zoned": false, 00:18:34.792 "supported_io_types": { 00:18:34.792 "read": true, 00:18:34.792 "write": true, 00:18:34.792 "unmap": true, 00:18:34.792 "flush": true, 00:18:34.792 "reset": true, 00:18:34.792 "nvme_admin": false, 00:18:34.792 "nvme_io": false, 00:18:34.792 "nvme_io_md": false, 00:18:34.792 "write_zeroes": true, 00:18:34.792 "zcopy": true, 00:18:34.792 "get_zone_info": false, 00:18:34.792 "zone_management": false, 00:18:34.792 "zone_append": false, 00:18:34.792 "compare": false, 00:18:34.792 "compare_and_write": false, 00:18:34.792 "abort": true, 00:18:34.792 "seek_hole": false, 00:18:34.792 "seek_data": false, 00:18:34.792 "copy": true, 00:18:34.792 "nvme_iov_md": false 00:18:34.792 }, 00:18:34.792 "memory_domains": [ 00:18:34.792 { 
00:18:34.792 "dma_device_id": "system", 00:18:34.792 "dma_device_type": 1 00:18:34.792 }, 00:18:34.792 { 00:18:34.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:34.792 "dma_device_type": 2 00:18:34.792 } 00:18:34.792 ], 00:18:34.792 "driver_specific": {} 00:18:34.792 } 00:18:34.792 ] 00:18:34.792 09:45:02 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.792 09:45:02 blockdev_general.bdev_stat -- common/autotest_common.sh@905 -- # return 0 00:18:34.792 09:45:02 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:34.792 09:45:02 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # sleep 2 00:18:34.792 Running I/O for 10 seconds... 00:18:36.696 09:45:04 blockdev_general.bdev_stat -- bdev/blockdev.sh@606 -- # stat_function_test Malloc_STAT 00:18:36.696 09:45:04 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local bdev_name=Malloc_STAT 00:18:36.696 09:45:04 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local iostats 00:18:36.696 09:45:04 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count1 00:18:36.696 09:45:04 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local io_count2 00:18:36.696 09:45:04 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local iostats_per_channel 00:18:36.696 09:45:04 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel1 00:18:36.696 09:45:04 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel2 00:18:36.696 09:45:04 blockdev_general.bdev_stat -- bdev/blockdev.sh@566 -- # local io_count_per_channel_all=0 00:18:36.696 09:45:04 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:18:36.696 09:45:04 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.696 09:45:04 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:18:36.696 09:45:04 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.696 09:45:04 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # iostats='{ 00:18:36.696 "tick_rate": 2494140116, 00:18:36.696 "ticks": 927020997050, 00:18:36.696 "bdevs": [ 00:18:36.696 { 00:18:36.696 "name": "Malloc_STAT", 00:18:36.696 "bytes_read": 13444878848, 00:18:36.696 "num_read_ops": 3282435, 00:18:36.696 "bytes_written": 0, 00:18:36.696 "num_write_ops": 0, 00:18:36.696 "bytes_unmapped": 0, 00:18:36.696 "num_unmap_ops": 0, 00:18:36.696 "bytes_copied": 0, 00:18:36.696 "num_copy_ops": 0, 00:18:36.696 "read_latency_ticks": 2629637345700, 00:18:36.696 "max_read_latency_ticks": 1466510, 00:18:36.696 "min_read_latency_ticks": 79594, 00:18:36.696 "write_latency_ticks": 0, 00:18:36.696 "max_write_latency_ticks": 0, 00:18:36.696 "min_write_latency_ticks": 0, 00:18:36.696 "unmap_latency_ticks": 0, 00:18:36.696 "max_unmap_latency_ticks": 0, 00:18:36.696 "min_unmap_latency_ticks": 0, 00:18:36.696 "copy_latency_ticks": 0, 00:18:36.696 "max_copy_latency_ticks": 0, 00:18:36.696 "min_copy_latency_ticks": 0, 00:18:36.696 "io_error": {} 00:18:36.696 } 00:18:36.696 ] 00:18:36.696 }' 00:18:36.696 09:45:04 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # jq -r '.bdevs[0].num_read_ops' 00:18:36.696 09:45:04 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # io_count1=3282435 00:18:36.696 09:45:04 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:18:36.696 09:45:04 
blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.696 09:45:04 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:18:36.956 09:45:04 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.956 09:45:04 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # iostats_per_channel='{ 00:18:36.956 "tick_rate": 2494140116, 00:18:36.956 "ticks": 927103312708, 00:18:36.956 "name": "Malloc_STAT", 00:18:36.956 "channels": [ 00:18:36.956 { 00:18:36.956 "thread_id": 2, 00:18:36.956 "bytes_read": 6733955072, 00:18:36.956 "num_read_ops": 1644032, 00:18:36.956 "bytes_written": 0, 00:18:36.956 "num_write_ops": 0, 00:18:36.956 "bytes_unmapped": 0, 00:18:36.956 "num_unmap_ops": 0, 00:18:36.956 "bytes_copied": 0, 00:18:36.956 "num_copy_ops": 0, 00:18:36.956 "read_latency_ticks": 1335824996946, 00:18:36.956 "max_read_latency_ticks": 1428792, 00:18:36.956 "min_read_latency_ticks": 728898, 00:18:36.956 "write_latency_ticks": 0, 00:18:36.956 "max_write_latency_ticks": 0, 00:18:36.956 "min_write_latency_ticks": 0, 00:18:36.956 "unmap_latency_ticks": 0, 00:18:36.956 "max_unmap_latency_ticks": 0, 00:18:36.956 "min_unmap_latency_ticks": 0, 00:18:36.956 "copy_latency_ticks": 0, 00:18:36.956 "max_copy_latency_ticks": 0, 00:18:36.956 "min_copy_latency_ticks": 0 00:18:36.956 }, 00:18:36.956 { 00:18:36.956 "thread_id": 3, 00:18:36.956 "bytes_read": 6931087360, 00:18:36.956 "num_read_ops": 1692160, 00:18:36.956 "bytes_written": 0, 00:18:36.956 "num_write_ops": 0, 00:18:36.956 "bytes_unmapped": 0, 00:18:36.956 "num_unmap_ops": 0, 00:18:36.956 "bytes_copied": 0, 00:18:36.956 "num_copy_ops": 0, 00:18:36.956 "read_latency_ticks": 1336050446682, 00:18:36.956 "max_read_latency_ticks": 1466510, 00:18:36.956 "min_read_latency_ticks": 705888, 00:18:36.956 "write_latency_ticks": 0, 00:18:36.956 "max_write_latency_ticks": 0, 00:18:36.956 "min_write_latency_ticks": 0, 00:18:36.956 "unmap_latency_ticks": 0, 00:18:36.956 "max_unmap_latency_ticks": 0, 00:18:36.956 "min_unmap_latency_ticks": 0, 00:18:36.956 "copy_latency_ticks": 0, 00:18:36.956 "max_copy_latency_ticks": 0, 00:18:36.956 "min_copy_latency_ticks": 0 00:18:36.956 } 00:18:36.956 ] 00:18:36.956 }' 00:18:36.956 09:45:04 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # jq -r '.channels[0].num_read_ops' 00:18:36.956 09:45:04 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel1=1644032 00:18:36.956 09:45:04 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=1644032 00:18:36.956 09:45:04 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # jq -r '.channels[1].num_read_ops' 00:18:36.956 09:45:04 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel2=1692160 00:18:36.956 09:45:04 blockdev_general.bdev_stat -- bdev/blockdev.sh@575 -- # io_count_per_channel_all=3336192 00:18:36.956 09:45:04 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:18:36.956 09:45:04 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.956 09:45:04 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:18:36.956 09:45:04 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.956 09:45:04 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # iostats='{ 00:18:36.956 "tick_rate": 2494140116, 00:18:36.956 "ticks": 927228039576, 00:18:36.956 "bdevs": [ 00:18:36.956 { 00:18:36.956 "name": "Malloc_STAT", 
00:18:36.956 "bytes_read": 13998526976, 00:18:36.956 "num_read_ops": 3417603, 00:18:36.956 "bytes_written": 0, 00:18:36.956 "num_write_ops": 0, 00:18:36.956 "bytes_unmapped": 0, 00:18:36.956 "num_unmap_ops": 0, 00:18:36.957 "bytes_copied": 0, 00:18:36.957 "num_copy_ops": 0, 00:18:36.957 "read_latency_ticks": 2735629306090, 00:18:36.957 "max_read_latency_ticks": 1466510, 00:18:36.957 "min_read_latency_ticks": 79594, 00:18:36.957 "write_latency_ticks": 0, 00:18:36.957 "max_write_latency_ticks": 0, 00:18:36.957 "min_write_latency_ticks": 0, 00:18:36.957 "unmap_latency_ticks": 0, 00:18:36.957 "max_unmap_latency_ticks": 0, 00:18:36.957 "min_unmap_latency_ticks": 0, 00:18:36.957 "copy_latency_ticks": 0, 00:18:36.957 "max_copy_latency_ticks": 0, 00:18:36.957 "min_copy_latency_ticks": 0, 00:18:36.957 "io_error": {} 00:18:36.957 } 00:18:36.957 ] 00:18:36.957 }' 00:18:36.957 09:45:04 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # jq -r '.bdevs[0].num_read_ops' 00:18:36.957 09:45:04 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # io_count2=3417603 00:18:36.957 09:45:04 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 3336192 -lt 3282435 ']' 00:18:36.957 09:45:04 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 3336192 -gt 3417603 ']' 00:18:36.957 09:45:04 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:18:36.957 09:45:04 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.957 09:45:04 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:18:36.957 00:18:36.957 Latency(us) 00:18:36.957 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.957 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:18:36.957 Malloc_STAT : 2.17 785814.02 3069.59 0.00 0.00 325.53 51.94 574.79 00:18:36.957 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:18:36.957 Malloc_STAT : 2.17 808606.38 3158.62 0.00 0.00 316.35 56.66 591.21 00:18:36.957 =================================================================================================================== 00:18:36.957 Total : 1594420.41 6228.20 0.00 0.00 320.87 51.94 591.21 00:18:36.957 0 00:18:36.957 09:45:04 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.957 09:45:04 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # killprocess 48511 00:18:36.957 09:45:04 blockdev_general.bdev_stat -- common/autotest_common.sh@948 -- # '[' -z 48511 ']' 00:18:36.957 09:45:04 blockdev_general.bdev_stat -- common/autotest_common.sh@952 -- # kill -0 48511 00:18:36.957 09:45:04 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # uname 00:18:36.957 09:45:04 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:18:36.957 09:45:04 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # ps -c -o command 48511 00:18:36.957 09:45:04 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # tail -1 00:18:36.957 09:45:04 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:18:36.957 09:45:04 blockdev_general.bdev_stat -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:18:36.957 killing process with pid 48511 00:18:36.957 09:45:04 blockdev_general.bdev_stat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48511' 00:18:36.957 09:45:04 blockdev_general.bdev_stat -- 
common/autotest_common.sh@967 -- # kill 48511 00:18:36.957 Received shutdown signal, test time was about 2.223790 seconds 00:18:36.957 00:18:36.957 Latency(us) 00:18:36.957 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.957 =================================================================================================================== 00:18:36.957 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:36.957 09:45:04 blockdev_general.bdev_stat -- common/autotest_common.sh@972 -- # wait 48511 00:18:37.218 09:45:05 blockdev_general.bdev_stat -- bdev/blockdev.sh@610 -- # trap - SIGINT SIGTERM EXIT 00:18:37.218 00:18:37.218 real 0m4.299s 00:18:37.218 user 0m7.353s 00:18:37.219 sys 0m0.927s 00:18:37.219 09:45:05 blockdev_general.bdev_stat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:37.219 09:45:05 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:18:37.219 ************************************ 00:18:37.219 END TEST bdev_stat 00:18:37.219 ************************************ 00:18:37.219 09:45:05 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:18:37.219 09:45:05 blockdev_general -- bdev/blockdev.sh@794 -- # [[ bdev == gpt ]] 00:18:37.219 09:45:05 blockdev_general -- bdev/blockdev.sh@798 -- # [[ bdev == crypto_sw ]] 00:18:37.219 09:45:05 blockdev_general -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:18:37.219 09:45:05 blockdev_general -- bdev/blockdev.sh@811 -- # cleanup 00:18:37.219 09:45:05 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:18:37.219 09:45:05 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:37.219 09:45:05 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:18:37.219 09:45:05 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:18:37.219 09:45:05 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:18:37.219 09:45:05 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:18:37.219 00:18:37.219 real 1m35.693s 00:18:37.219 user 4m29.207s 00:18:37.219 sys 0m29.154s 00:18:37.219 09:45:05 blockdev_general -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:37.219 09:45:05 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:18:37.219 ************************************ 00:18:37.219 END TEST blockdev_general 00:18:37.219 ************************************ 00:18:37.219 09:45:05 -- common/autotest_common.sh@1142 -- # return 0 00:18:37.219 09:45:05 -- spdk/autotest.sh@190 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:18:37.219 09:45:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:37.219 09:45:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:37.219 09:45:05 -- common/autotest_common.sh@10 -- # set +x 00:18:37.219 ************************************ 00:18:37.219 START TEST bdev_raid 00:18:37.219 ************************************ 00:18:37.219 09:45:05 bdev_raid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:18:37.478 * Looking for test storage... 
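Back in the bdev_stat run above, the assertions reduce to one invariant: the per-channel read counters, summed, must land between a floor derived from the bdev-level aggregate and the aggregate itself (the channels are sampled a moment before the bdev totals, so the sum may lag but never exceed them). A minimal standalone sketch of the ceiling check, assuming a running SPDK target with a bdev named Malloc_STAT and assuming bdev_get_iostat accepts a -c per-channel flag (inferred from the iostats_per_channel capture above):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
per_channel=$($rpc bdev_get_iostat -b Malloc_STAT -c)   # assumed per-channel flag
aggregate=$($rpc bdev_get_iostat -b Malloc_STAT)        # bdev-level counters
ch_sum=$(jq '[.channels[].num_read_ops] | add' <<< "$per_channel")
total=$(jq '.bdevs[0].num_read_ops' <<< "$aggregate")
# In the run above: 1644032 + 1692160 = 3336192, which clears the floor (3282435)
# and stays at or below the aggregate (3417603), so both tests pass.
[ "$ch_sum" -le "$total" ] || echo "channel sum $ch_sum exceeds aggregate $total"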
00:18:37.478 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:37.478 09:45:05 bdev_raid -- bdev/bdev_raid.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:37.478 09:45:05 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:18:37.478 09:45:05 bdev_raid -- bdev/bdev_raid.sh@15 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:18:37.478 09:45:05 bdev_raid -- bdev/bdev_raid.sh@851 -- # mkdir -p /raidtest 00:18:37.478 09:45:05 bdev_raid -- bdev/bdev_raid.sh@852 -- # trap 'cleanup; exit 1' EXIT 00:18:37.478 09:45:05 bdev_raid -- bdev/bdev_raid.sh@854 -- # base_blocklen=512 00:18:37.478 09:45:05 bdev_raid -- bdev/bdev_raid.sh@856 -- # uname -s 00:18:37.478 09:45:05 bdev_raid -- bdev/bdev_raid.sh@856 -- # '[' FreeBSD = Linux ']' 00:18:37.478 09:45:05 bdev_raid -- bdev/bdev_raid.sh@863 -- # run_test raid0_resize_test raid0_resize_test 00:18:37.478 09:45:05 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:37.478 09:45:05 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:37.478 09:45:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:37.478 ************************************ 00:18:37.478 START TEST raid0_resize_test 00:18:37.478 ************************************ 00:18:37.478 09:45:05 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1123 -- # raid0_resize_test 00:18:37.478 09:45:05 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # local blksize=512 00:18:37.478 09:45:05 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local bdev_size_mb=32 00:18:37.478 09:45:05 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local new_bdev_size_mb=64 00:18:37.478 09:45:05 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local blkcnt 00:18:37.478 09:45:05 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local raid_size_mb 00:18:37.478 09:45:05 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local new_raid_size_mb 00:18:37.478 09:45:05 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@355 -- # raid_pid=48622 00:18:37.478 Process raid pid: 48622 00:18:37.478 09:45:05 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # echo 'Process raid pid: 48622' 00:18:37.478 09:45:05 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # waitforlisten 48622 /var/tmp/spdk-raid.sock 00:18:37.478 09:45:05 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@354 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:37.478 09:45:05 bdev_raid.raid0_resize_test -- common/autotest_common.sh@829 -- # '[' -z 48622 ']' 00:18:37.478 09:45:05 bdev_raid.raid0_resize_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:37.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:37.478 09:45:05 bdev_raid.raid0_resize_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:37.478 09:45:05 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:37.478 09:45:05 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:37.478 09:45:05 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.478 [2024-07-15 09:45:05.486137] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
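Condensed, the resize flow this test drives over rpc.py is the short sequence below. The invariant it checks: a raid0 volume exposes num_base_bdevs times the smallest base bdev, so growing one null bdev alone leaves the array at its old size, and only growing both doubles it. A sketch against a bdev_svc app already listening on /var/tmp/spdk-raid.sock, mirroring the calls traced in the lines that follow:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$rpc bdev_null_create Base_1 32 512                          # 32 MiB at 512 B blocks = 65536 blocks
$rpc bdev_null_create Base_2 32 512
$rpc bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid  # raid0, 64 KiB strips, 131072 blocks
$rpc bdev_null_resize Base_1 64                              # Raid stays at 131072 blocks (64 MiB)
$rpc bdev_null_resize Base_2 64                              # both bases now 64 MiB, so...
$rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks'            # ...Raid reports 262144 (128 MiB)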
00:18:37.478 [2024-07-15 09:45:05.486446] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:38.413 EAL: TSC is not safe to use in SMP mode 00:18:38.413 EAL: TSC is not invariant 00:18:38.413 [2024-07-15 09:45:06.210702] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.413 [2024-07-15 09:45:06.328542] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:38.413 [2024-07-15 09:45:06.331241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.413 [2024-07-15 09:45:06.332123] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:38.413 [2024-07-15 09:45:06.332141] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:38.671 09:45:06 bdev_raid.raid0_resize_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:38.672 09:45:06 bdev_raid.raid0_resize_test -- common/autotest_common.sh@862 -- # return 0 00:18:38.672 09:45:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:18:38.672 Base_1 00:18:38.672 09:45:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:18:38.930 Base_2 00:18:38.930 09:45:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:18:39.190 [2024-07-15 09:45:07.137074] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:18:39.190 [2024-07-15 09:45:07.137759] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:18:39.190 [2024-07-15 09:45:07.137790] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x9f90b434a00 00:18:39.190 [2024-07-15 09:45:07.137794] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:18:39.190 [2024-07-15 09:45:07.137836] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9f90b497e20 00:18:39.190 [2024-07-15 09:45:07.137911] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x9f90b434a00 00:18:39.190 [2024-07-15 09:45:07.137915] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x9f90b434a00 00:18:39.190 [2024-07-15 09:45:07.137952] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:39.190 09:45:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@365 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:18:39.449 [2024-07-15 09:45:07.337074] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:18:39.449 [2024-07-15 09:45:07.337108] bdev_raid.c:2276:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:18:39.449 true 00:18:39.449 09:45:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:18:39.449 09:45:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # jq '.[].num_blocks' 00:18:39.708 [2024-07-15 09:45:07.541095] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:39.708 
09:45:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # blkcnt=131072 00:18:39.708 09:45:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # raid_size_mb=64 00:18:39.708 09:45:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@370 -- # '[' 64 '!=' 64 ']' 00:18:39.708 09:45:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:18:39.708 [2024-07-15 09:45:07.765076] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:18:39.708 [2024-07-15 09:45:07.765118] bdev_raid.c:2276:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:18:39.708 [2024-07-15 09:45:07.765158] bdev_raid.c:2290:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:18:39.708 true 00:18:39.970 09:45:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:18:39.970 09:45:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # jq '.[].num_blocks' 00:18:39.970 [2024-07-15 09:45:08.005095] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:39.970 09:45:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # blkcnt=262144 00:18:39.970 09:45:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # raid_size_mb=128 00:18:39.970 09:45:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 128 '!=' 128 ']' 00:18:39.970 09:45:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@386 -- # killprocess 48622 00:18:39.970 09:45:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@948 -- # '[' -z 48622 ']' 00:18:39.970 09:45:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # kill -0 48622 00:18:39.970 09:45:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # uname 00:18:39.970 09:45:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:18:39.970 09:45:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps -c -o command 48622 00:18:39.970 09:45:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # tail -1 00:18:39.970 09:45:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:18:39.970 09:45:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:18:39.970 killing process with pid 48622 00:18:39.970 09:45:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48622' 00:18:39.970 09:45:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@967 -- # kill 48622 00:18:39.970 [2024-07-15 09:45:08.040153] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:39.970 [2024-07-15 09:45:08.040188] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:39.970 09:45:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # wait 48622 00:18:39.970 [2024-07-15 09:45:08.040201] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:39.970 [2024-07-15 09:45:08.040205] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9f90b434a00 name Raid, state offline 00:18:39.970 [2024-07-15 09:45:08.040404] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:40.229 
09:45:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@388 -- # return 0 00:18:40.229 00:18:40.229 real 0m2.826s 00:18:40.229 user 0m3.870s 00:18:40.229 sys 0m0.964s 00:18:40.229 09:45:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:40.229 09:45:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.229 ************************************ 00:18:40.229 END TEST raid0_resize_test 00:18:40.229 ************************************ 00:18:40.490 09:45:08 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:40.490 09:45:08 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:18:40.490 09:45:08 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:18:40.490 09:45:08 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:18:40.490 09:45:08 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:40.490 09:45:08 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:40.490 09:45:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:40.490 ************************************ 00:18:40.490 START TEST raid_state_function_test 00:18:40.490 ************************************ 00:18:40.490 09:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 2 false 00:18:40.490 09:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:18:40.490 09:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:18:40.490 09:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:18:40.490 09:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:40.490 09:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:40.490 09:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:40.490 09:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:18:40.490 09:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:40.490 09:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:40.490 09:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:18:40.490 09:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:40.490 09:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:40.490 09:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:40.490 09:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:40.490 09:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:40.490 09:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:40.490 09:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:40.490 09:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:40.490 09:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:18:40.490 09:45:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@231 -- # strip_size=64 00:18:40.490 09:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:18:40.490 09:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:18:40.490 09:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:18:40.490 09:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:40.490 Process raid pid: 48668 00:18:40.490 09:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=48668 00:18:40.490 09:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 48668' 00:18:40.490 09:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 48668 /var/tmp/spdk-raid.sock 00:18:40.490 09:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 48668 ']' 00:18:40.490 09:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:40.490 09:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:40.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:40.490 09:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:40.490 09:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:40.490 09:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.490 [2024-07-15 09:45:08.368374] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:18:40.490 [2024-07-15 09:45:08.368561] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:41.061 EAL: TSC is not safe to use in SMP mode 00:18:41.061 EAL: TSC is not invariant 00:18:41.061 [2024-07-15 09:45:09.104109] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.328 [2024-07-15 09:45:09.225136] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
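The state transitions verified below follow one rule: a raid bdev created before its base bdevs exist sits in state "configuring" until every base has been discovered and claimed, flips to "online" once the last one is, and drops to "offline" when a base is removed, because raid0 carries no redundancy. A sketch of the same walk, again over /var/tmp/spdk-raid.sock and mirroring the RPCs traced below:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'  # configuring
$rpc bdev_malloc_create 32 512 -b BaseBdev1   # 1 of 2 bases discovered: still configuring
$rpc bdev_malloc_create 32 512 -b BaseBdev2   # 2 of 2 discovered and claimed: online
$rpc bdev_malloc_delete BaseBdev1             # raid0 cannot survive a lost base: offline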
00:18:41.328 [2024-07-15 09:45:09.227774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.328 [2024-07-15 09:45:09.228521] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:41.328 [2024-07-15 09:45:09.228533] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:41.328 09:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:41.328 09:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:18:41.328 09:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:41.587 [2024-07-15 09:45:09.607777] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:41.587 [2024-07-15 09:45:09.607854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:41.587 [2024-07-15 09:45:09.607859] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:41.587 [2024-07-15 09:45:09.607867] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:41.587 09:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:18:41.587 09:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:41.587 09:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:41.587 09:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:41.587 09:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:41.587 09:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:41.587 09:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:41.587 09:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:41.587 09:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:41.587 09:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:41.587 09:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.587 09:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:41.848 09:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:41.848 "name": "Existed_Raid", 00:18:41.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.848 "strip_size_kb": 64, 00:18:41.848 "state": "configuring", 00:18:41.848 "raid_level": "raid0", 00:18:41.848 "superblock": false, 00:18:41.848 "num_base_bdevs": 2, 00:18:41.848 "num_base_bdevs_discovered": 0, 00:18:41.848 "num_base_bdevs_operational": 2, 00:18:41.848 "base_bdevs_list": [ 00:18:41.848 { 00:18:41.848 "name": "BaseBdev1", 00:18:41.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.848 "is_configured": false, 00:18:41.848 "data_offset": 0, 00:18:41.848 "data_size": 0 00:18:41.848 }, 00:18:41.848 { 00:18:41.848 "name": "BaseBdev2", 
00:18:41.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.848 "is_configured": false, 00:18:41.848 "data_offset": 0, 00:18:41.848 "data_size": 0 00:18:41.848 } 00:18:41.848 ] 00:18:41.848 }' 00:18:41.848 09:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:41.848 09:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.108 09:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:42.367 [2024-07-15 09:45:10.291803] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:42.367 [2024-07-15 09:45:10.291834] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15b679c34500 name Existed_Raid, state configuring 00:18:42.367 09:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:42.626 [2024-07-15 09:45:10.511795] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:42.626 [2024-07-15 09:45:10.511850] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:42.626 [2024-07-15 09:45:10.511854] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:42.626 [2024-07-15 09:45:10.511861] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:42.626 09:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:42.885 [2024-07-15 09:45:10.732966] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:42.885 BaseBdev1 00:18:42.885 09:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:18:42.885 09:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:42.885 09:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:42.885 09:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:42.885 09:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:42.885 09:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:42.885 09:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:42.885 09:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:43.145 [ 00:18:43.145 { 00:18:43.145 "name": "BaseBdev1", 00:18:43.145 "aliases": [ 00:18:43.145 "ed640272-428e-11ef-a0af-c98d8ee52a94" 00:18:43.145 ], 00:18:43.145 "product_name": "Malloc disk", 00:18:43.145 "block_size": 512, 00:18:43.145 "num_blocks": 65536, 00:18:43.145 "uuid": "ed640272-428e-11ef-a0af-c98d8ee52a94", 00:18:43.145 "assigned_rate_limits": { 00:18:43.145 "rw_ios_per_sec": 0, 00:18:43.145 "rw_mbytes_per_sec": 0, 00:18:43.145 "r_mbytes_per_sec": 0, 00:18:43.145 "w_mbytes_per_sec": 0 00:18:43.145 }, 
00:18:43.145 "claimed": true, 00:18:43.145 "claim_type": "exclusive_write", 00:18:43.145 "zoned": false, 00:18:43.145 "supported_io_types": { 00:18:43.145 "read": true, 00:18:43.145 "write": true, 00:18:43.145 "unmap": true, 00:18:43.145 "flush": true, 00:18:43.145 "reset": true, 00:18:43.145 "nvme_admin": false, 00:18:43.145 "nvme_io": false, 00:18:43.145 "nvme_io_md": false, 00:18:43.145 "write_zeroes": true, 00:18:43.145 "zcopy": true, 00:18:43.145 "get_zone_info": false, 00:18:43.145 "zone_management": false, 00:18:43.145 "zone_append": false, 00:18:43.145 "compare": false, 00:18:43.145 "compare_and_write": false, 00:18:43.145 "abort": true, 00:18:43.145 "seek_hole": false, 00:18:43.145 "seek_data": false, 00:18:43.145 "copy": true, 00:18:43.145 "nvme_iov_md": false 00:18:43.145 }, 00:18:43.145 "memory_domains": [ 00:18:43.145 { 00:18:43.145 "dma_device_id": "system", 00:18:43.145 "dma_device_type": 1 00:18:43.145 }, 00:18:43.145 { 00:18:43.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.146 "dma_device_type": 2 00:18:43.146 } 00:18:43.146 ], 00:18:43.146 "driver_specific": {} 00:18:43.146 } 00:18:43.146 ] 00:18:43.146 09:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:43.146 09:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:18:43.146 09:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:43.146 09:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:43.146 09:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:43.146 09:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:43.146 09:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:43.146 09:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:43.146 09:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:43.146 09:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:43.146 09:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:43.146 09:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.146 09:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:43.405 09:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:43.405 "name": "Existed_Raid", 00:18:43.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.405 "strip_size_kb": 64, 00:18:43.405 "state": "configuring", 00:18:43.405 "raid_level": "raid0", 00:18:43.406 "superblock": false, 00:18:43.406 "num_base_bdevs": 2, 00:18:43.406 "num_base_bdevs_discovered": 1, 00:18:43.406 "num_base_bdevs_operational": 2, 00:18:43.406 "base_bdevs_list": [ 00:18:43.406 { 00:18:43.406 "name": "BaseBdev1", 00:18:43.406 "uuid": "ed640272-428e-11ef-a0af-c98d8ee52a94", 00:18:43.406 "is_configured": true, 00:18:43.406 "data_offset": 0, 00:18:43.406 "data_size": 65536 00:18:43.406 }, 00:18:43.406 { 00:18:43.406 "name": "BaseBdev2", 00:18:43.406 "uuid": "00000000-0000-0000-0000-000000000000", 
00:18:43.406 "is_configured": false, 00:18:43.406 "data_offset": 0, 00:18:43.406 "data_size": 0 00:18:43.406 } 00:18:43.406 ] 00:18:43.406 }' 00:18:43.406 09:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:43.406 09:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.975 09:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:43.975 [2024-07-15 09:45:12.007829] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:43.975 [2024-07-15 09:45:12.007868] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15b679c34500 name Existed_Raid, state configuring 00:18:43.975 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:44.234 [2024-07-15 09:45:12.223870] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:44.234 [2024-07-15 09:45:12.224890] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:44.234 [2024-07-15 09:45:12.224942] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:44.234 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:18:44.234 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:44.234 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:18:44.234 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:44.234 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:44.234 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:44.234 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:44.234 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:44.234 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:44.234 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:44.234 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:44.234 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:44.234 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:44.234 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:44.492 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:44.492 "name": "Existed_Raid", 00:18:44.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.492 "strip_size_kb": 64, 00:18:44.492 "state": "configuring", 00:18:44.492 "raid_level": "raid0", 00:18:44.492 "superblock": false, 00:18:44.492 "num_base_bdevs": 2, 00:18:44.492 "num_base_bdevs_discovered": 1, 00:18:44.492 
"num_base_bdevs_operational": 2, 00:18:44.492 "base_bdevs_list": [ 00:18:44.492 { 00:18:44.492 "name": "BaseBdev1", 00:18:44.492 "uuid": "ed640272-428e-11ef-a0af-c98d8ee52a94", 00:18:44.492 "is_configured": true, 00:18:44.492 "data_offset": 0, 00:18:44.492 "data_size": 65536 00:18:44.492 }, 00:18:44.492 { 00:18:44.492 "name": "BaseBdev2", 00:18:44.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.492 "is_configured": false, 00:18:44.492 "data_offset": 0, 00:18:44.492 "data_size": 0 00:18:44.492 } 00:18:44.492 ] 00:18:44.492 }' 00:18:44.492 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:44.492 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.750 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:45.009 [2024-07-15 09:45:12.975992] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:45.009 [2024-07-15 09:45:12.976026] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x15b679c34a00 00:18:45.009 [2024-07-15 09:45:12.976031] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:18:45.009 [2024-07-15 09:45:12.976050] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x15b679c97e20 00:18:45.009 [2024-07-15 09:45:12.976149] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x15b679c34a00 00:18:45.009 [2024-07-15 09:45:12.976152] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x15b679c34a00 00:18:45.009 [2024-07-15 09:45:12.976182] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:45.009 BaseBdev2 00:18:45.009 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:18:45.009 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:45.009 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:45.009 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:45.009 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:45.009 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:45.009 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:45.267 09:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:45.530 [ 00:18:45.530 { 00:18:45.530 "name": "BaseBdev2", 00:18:45.530 "aliases": [ 00:18:45.530 "eeba6afb-428e-11ef-a0af-c98d8ee52a94" 00:18:45.530 ], 00:18:45.530 "product_name": "Malloc disk", 00:18:45.530 "block_size": 512, 00:18:45.530 "num_blocks": 65536, 00:18:45.530 "uuid": "eeba6afb-428e-11ef-a0af-c98d8ee52a94", 00:18:45.530 "assigned_rate_limits": { 00:18:45.530 "rw_ios_per_sec": 0, 00:18:45.531 "rw_mbytes_per_sec": 0, 00:18:45.531 "r_mbytes_per_sec": 0, 00:18:45.531 "w_mbytes_per_sec": 0 00:18:45.531 }, 00:18:45.531 "claimed": true, 00:18:45.531 "claim_type": "exclusive_write", 00:18:45.531 "zoned": 
false, 00:18:45.531 "supported_io_types": { 00:18:45.531 "read": true, 00:18:45.531 "write": true, 00:18:45.531 "unmap": true, 00:18:45.531 "flush": true, 00:18:45.531 "reset": true, 00:18:45.531 "nvme_admin": false, 00:18:45.531 "nvme_io": false, 00:18:45.531 "nvme_io_md": false, 00:18:45.531 "write_zeroes": true, 00:18:45.531 "zcopy": true, 00:18:45.531 "get_zone_info": false, 00:18:45.531 "zone_management": false, 00:18:45.531 "zone_append": false, 00:18:45.531 "compare": false, 00:18:45.531 "compare_and_write": false, 00:18:45.531 "abort": true, 00:18:45.531 "seek_hole": false, 00:18:45.531 "seek_data": false, 00:18:45.531 "copy": true, 00:18:45.531 "nvme_iov_md": false 00:18:45.531 }, 00:18:45.531 "memory_domains": [ 00:18:45.531 { 00:18:45.531 "dma_device_id": "system", 00:18:45.531 "dma_device_type": 1 00:18:45.531 }, 00:18:45.531 { 00:18:45.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:45.531 "dma_device_type": 2 00:18:45.531 } 00:18:45.531 ], 00:18:45.531 "driver_specific": {} 00:18:45.531 } 00:18:45.531 ] 00:18:45.531 09:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:45.531 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:45.531 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:45.531 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:18:45.531 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:45.531 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:45.531 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:45.531 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:45.531 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:45.531 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:45.531 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:45.531 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:45.531 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:45.531 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.531 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:45.789 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:45.789 "name": "Existed_Raid", 00:18:45.789 "uuid": "eeba7266-428e-11ef-a0af-c98d8ee52a94", 00:18:45.790 "strip_size_kb": 64, 00:18:45.790 "state": "online", 00:18:45.790 "raid_level": "raid0", 00:18:45.790 "superblock": false, 00:18:45.790 "num_base_bdevs": 2, 00:18:45.790 "num_base_bdevs_discovered": 2, 00:18:45.790 "num_base_bdevs_operational": 2, 00:18:45.790 "base_bdevs_list": [ 00:18:45.790 { 00:18:45.790 "name": "BaseBdev1", 00:18:45.790 "uuid": "ed640272-428e-11ef-a0af-c98d8ee52a94", 00:18:45.790 "is_configured": true, 00:18:45.790 "data_offset": 0, 00:18:45.790 "data_size": 65536 00:18:45.790 }, 00:18:45.790 { 
00:18:45.790 "name": "BaseBdev2", 00:18:45.790 "uuid": "eeba6afb-428e-11ef-a0af-c98d8ee52a94", 00:18:45.790 "is_configured": true, 00:18:45.790 "data_offset": 0, 00:18:45.790 "data_size": 65536 00:18:45.790 } 00:18:45.790 ] 00:18:45.790 }' 00:18:45.790 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:45.790 09:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.055 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:18:46.055 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:46.055 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:46.055 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:46.055 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:46.055 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:46.055 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:46.055 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:46.055 [2024-07-15 09:45:14.127894] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:46.316 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:46.316 "name": "Existed_Raid", 00:18:46.316 "aliases": [ 00:18:46.316 "eeba7266-428e-11ef-a0af-c98d8ee52a94" 00:18:46.316 ], 00:18:46.316 "product_name": "Raid Volume", 00:18:46.316 "block_size": 512, 00:18:46.316 "num_blocks": 131072, 00:18:46.316 "uuid": "eeba7266-428e-11ef-a0af-c98d8ee52a94", 00:18:46.316 "assigned_rate_limits": { 00:18:46.316 "rw_ios_per_sec": 0, 00:18:46.316 "rw_mbytes_per_sec": 0, 00:18:46.316 "r_mbytes_per_sec": 0, 00:18:46.316 "w_mbytes_per_sec": 0 00:18:46.316 }, 00:18:46.316 "claimed": false, 00:18:46.316 "zoned": false, 00:18:46.316 "supported_io_types": { 00:18:46.316 "read": true, 00:18:46.316 "write": true, 00:18:46.316 "unmap": true, 00:18:46.316 "flush": true, 00:18:46.316 "reset": true, 00:18:46.316 "nvme_admin": false, 00:18:46.316 "nvme_io": false, 00:18:46.316 "nvme_io_md": false, 00:18:46.316 "write_zeroes": true, 00:18:46.316 "zcopy": false, 00:18:46.316 "get_zone_info": false, 00:18:46.316 "zone_management": false, 00:18:46.316 "zone_append": false, 00:18:46.316 "compare": false, 00:18:46.316 "compare_and_write": false, 00:18:46.316 "abort": false, 00:18:46.316 "seek_hole": false, 00:18:46.316 "seek_data": false, 00:18:46.316 "copy": false, 00:18:46.316 "nvme_iov_md": false 00:18:46.316 }, 00:18:46.316 "memory_domains": [ 00:18:46.316 { 00:18:46.316 "dma_device_id": "system", 00:18:46.316 "dma_device_type": 1 00:18:46.316 }, 00:18:46.316 { 00:18:46.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:46.316 "dma_device_type": 2 00:18:46.316 }, 00:18:46.316 { 00:18:46.316 "dma_device_id": "system", 00:18:46.316 "dma_device_type": 1 00:18:46.316 }, 00:18:46.316 { 00:18:46.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:46.316 "dma_device_type": 2 00:18:46.316 } 00:18:46.316 ], 00:18:46.316 "driver_specific": { 00:18:46.316 "raid": { 00:18:46.316 "uuid": "eeba7266-428e-11ef-a0af-c98d8ee52a94", 00:18:46.316 "strip_size_kb": 64, 00:18:46.316 "state": 
"online", 00:18:46.316 "raid_level": "raid0", 00:18:46.316 "superblock": false, 00:18:46.316 "num_base_bdevs": 2, 00:18:46.316 "num_base_bdevs_discovered": 2, 00:18:46.316 "num_base_bdevs_operational": 2, 00:18:46.316 "base_bdevs_list": [ 00:18:46.316 { 00:18:46.316 "name": "BaseBdev1", 00:18:46.316 "uuid": "ed640272-428e-11ef-a0af-c98d8ee52a94", 00:18:46.316 "is_configured": true, 00:18:46.316 "data_offset": 0, 00:18:46.316 "data_size": 65536 00:18:46.316 }, 00:18:46.316 { 00:18:46.316 "name": "BaseBdev2", 00:18:46.316 "uuid": "eeba6afb-428e-11ef-a0af-c98d8ee52a94", 00:18:46.316 "is_configured": true, 00:18:46.316 "data_offset": 0, 00:18:46.316 "data_size": 65536 00:18:46.316 } 00:18:46.316 ] 00:18:46.316 } 00:18:46.316 } 00:18:46.316 }' 00:18:46.316 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:46.316 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:18:46.316 BaseBdev2' 00:18:46.316 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:46.316 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:46.316 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:46.316 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:46.316 "name": "BaseBdev1", 00:18:46.316 "aliases": [ 00:18:46.316 "ed640272-428e-11ef-a0af-c98d8ee52a94" 00:18:46.316 ], 00:18:46.316 "product_name": "Malloc disk", 00:18:46.316 "block_size": 512, 00:18:46.316 "num_blocks": 65536, 00:18:46.316 "uuid": "ed640272-428e-11ef-a0af-c98d8ee52a94", 00:18:46.316 "assigned_rate_limits": { 00:18:46.316 "rw_ios_per_sec": 0, 00:18:46.316 "rw_mbytes_per_sec": 0, 00:18:46.316 "r_mbytes_per_sec": 0, 00:18:46.316 "w_mbytes_per_sec": 0 00:18:46.316 }, 00:18:46.316 "claimed": true, 00:18:46.316 "claim_type": "exclusive_write", 00:18:46.316 "zoned": false, 00:18:46.316 "supported_io_types": { 00:18:46.316 "read": true, 00:18:46.316 "write": true, 00:18:46.316 "unmap": true, 00:18:46.316 "flush": true, 00:18:46.316 "reset": true, 00:18:46.316 "nvme_admin": false, 00:18:46.316 "nvme_io": false, 00:18:46.316 "nvme_io_md": false, 00:18:46.316 "write_zeroes": true, 00:18:46.316 "zcopy": true, 00:18:46.316 "get_zone_info": false, 00:18:46.316 "zone_management": false, 00:18:46.316 "zone_append": false, 00:18:46.316 "compare": false, 00:18:46.316 "compare_and_write": false, 00:18:46.316 "abort": true, 00:18:46.316 "seek_hole": false, 00:18:46.316 "seek_data": false, 00:18:46.316 "copy": true, 00:18:46.316 "nvme_iov_md": false 00:18:46.316 }, 00:18:46.316 "memory_domains": [ 00:18:46.316 { 00:18:46.316 "dma_device_id": "system", 00:18:46.316 "dma_device_type": 1 00:18:46.316 }, 00:18:46.316 { 00:18:46.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:46.316 "dma_device_type": 2 00:18:46.316 } 00:18:46.316 ], 00:18:46.316 "driver_specific": {} 00:18:46.316 }' 00:18:46.316 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:46.316 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:46.316 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:46.316 09:45:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:46.577 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:46.577 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:46.577 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:46.577 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:46.577 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:46.577 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:46.577 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:46.577 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:46.577 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:46.577 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:46.577 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:46.577 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:46.577 "name": "BaseBdev2", 00:18:46.577 "aliases": [ 00:18:46.577 "eeba6afb-428e-11ef-a0af-c98d8ee52a94" 00:18:46.577 ], 00:18:46.577 "product_name": "Malloc disk", 00:18:46.577 "block_size": 512, 00:18:46.577 "num_blocks": 65536, 00:18:46.577 "uuid": "eeba6afb-428e-11ef-a0af-c98d8ee52a94", 00:18:46.577 "assigned_rate_limits": { 00:18:46.577 "rw_ios_per_sec": 0, 00:18:46.577 "rw_mbytes_per_sec": 0, 00:18:46.577 "r_mbytes_per_sec": 0, 00:18:46.577 "w_mbytes_per_sec": 0 00:18:46.578 }, 00:18:46.578 "claimed": true, 00:18:46.578 "claim_type": "exclusive_write", 00:18:46.578 "zoned": false, 00:18:46.578 "supported_io_types": { 00:18:46.578 "read": true, 00:18:46.578 "write": true, 00:18:46.578 "unmap": true, 00:18:46.578 "flush": true, 00:18:46.578 "reset": true, 00:18:46.578 "nvme_admin": false, 00:18:46.578 "nvme_io": false, 00:18:46.578 "nvme_io_md": false, 00:18:46.578 "write_zeroes": true, 00:18:46.578 "zcopy": true, 00:18:46.578 "get_zone_info": false, 00:18:46.578 "zone_management": false, 00:18:46.578 "zone_append": false, 00:18:46.578 "compare": false, 00:18:46.578 "compare_and_write": false, 00:18:46.578 "abort": true, 00:18:46.578 "seek_hole": false, 00:18:46.578 "seek_data": false, 00:18:46.578 "copy": true, 00:18:46.578 "nvme_iov_md": false 00:18:46.578 }, 00:18:46.578 "memory_domains": [ 00:18:46.578 { 00:18:46.578 "dma_device_id": "system", 00:18:46.578 "dma_device_type": 1 00:18:46.578 }, 00:18:46.578 { 00:18:46.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:46.578 "dma_device_type": 2 00:18:46.578 } 00:18:46.578 ], 00:18:46.578 "driver_specific": {} 00:18:46.578 }' 00:18:46.578 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:46.838 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:46.838 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:46.838 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:46.838 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:46.838 09:45:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:46.838 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:46.838 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:46.838 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:46.838 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:46.838 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:46.838 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:46.838 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:46.838 [2024-07-15 09:45:14.915880] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:46.838 [2024-07-15 09:45:14.915908] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:46.838 [2024-07-15 09:45:14.915922] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:47.098 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:47.098 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:18:47.098 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:47.098 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:47.098 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:18:47.098 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:18:47.098 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:47.098 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:18:47.098 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:47.098 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:47.098 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:47.098 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:47.098 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:47.098 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:47.098 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:47.098 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.098 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:47.098 09:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:47.098 "name": "Existed_Raid", 00:18:47.098 "uuid": "eeba7266-428e-11ef-a0af-c98d8ee52a94", 00:18:47.098 "strip_size_kb": 64, 00:18:47.098 "state": "offline", 00:18:47.098 "raid_level": "raid0", 00:18:47.098 "superblock": false, 00:18:47.098 
"num_base_bdevs": 2, 00:18:47.098 "num_base_bdevs_discovered": 1, 00:18:47.098 "num_base_bdevs_operational": 1, 00:18:47.098 "base_bdevs_list": [ 00:18:47.098 { 00:18:47.098 "name": null, 00:18:47.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.098 "is_configured": false, 00:18:47.098 "data_offset": 0, 00:18:47.098 "data_size": 65536 00:18:47.098 }, 00:18:47.098 { 00:18:47.098 "name": "BaseBdev2", 00:18:47.098 "uuid": "eeba6afb-428e-11ef-a0af-c98d8ee52a94", 00:18:47.098 "is_configured": true, 00:18:47.098 "data_offset": 0, 00:18:47.098 "data_size": 65536 00:18:47.098 } 00:18:47.098 ] 00:18:47.098 }' 00:18:47.098 09:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:47.098 09:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.665 09:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:47.665 09:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:47.665 09:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.665 09:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:47.665 09:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:47.665 09:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:47.665 09:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:47.923 [2024-07-15 09:45:15.880894] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:47.923 [2024-07-15 09:45:15.880927] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15b679c34a00 name Existed_Raid, state offline 00:18:47.923 09:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:47.923 09:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:47.923 09:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.923 09:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:48.181 09:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:48.181 09:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:48.181 09:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:18:48.181 09:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 48668 00:18:48.181 09:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 48668 ']' 00:18:48.181 09:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 48668 00:18:48.181 09:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:18:48.182 09:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:18:48.182 09:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 48668 00:18:48.182 09:45:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # tail -1 00:18:48.182 09:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:18:48.182 09:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:18:48.182 killing process with pid 48668 00:18:48.182 09:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48668' 00:18:48.182 09:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 48668 00:18:48.182 [2024-07-15 09:45:16.142958] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:48.182 [2024-07-15 09:45:16.142992] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:48.182 09:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 48668 00:18:48.452 09:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:18:48.452 00:18:48.452 real 0m8.045s 00:18:48.452 user 0m13.409s 00:18:48.452 sys 0m1.886s 00:18:48.452 09:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:48.452 ************************************ 00:18:48.452 09:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.452 END TEST raid_state_function_test 00:18:48.452 ************************************ 00:18:48.452 09:45:16 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:48.452 09:45:16 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:18:48.452 09:45:16 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:48.452 09:45:16 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:48.452 09:45:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:48.452 ************************************ 00:18:48.452 START TEST raid_state_function_test_sb 00:18:48.452 ************************************ 00:18:48.452 09:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 2 true 00:18:48.452 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:18:48.452 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:18:48.453 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:18:48.453 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:48.453 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:48.453 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:48.453 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:18:48.453 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:48.453 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:48.453 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:18:48.453 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:48.453 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:48.453 09:45:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:48.453 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:48.453 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:48.453 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:48.453 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:48.453 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:48.453 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:18:48.453 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:18:48.453 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:18:48.453 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:18:48.453 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:18:48.453 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=48939 00:18:48.453 Process raid pid: 48939 00:18:48.453 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 48939' 00:18:48.453 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:48.453 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 48939 /var/tmp/spdk-raid.sock 00:18:48.453 09:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 48939 ']' 00:18:48.453 09:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:48.453 09:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:48.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:48.453 09:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:48.453 09:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:48.453 09:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.453 [2024-07-15 09:45:16.475463] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:18:48.453 [2024-07-15 09:45:16.475733] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:49.410 EAL: TSC is not safe to use in SMP mode 00:18:49.410 EAL: TSC is not invariant 00:18:49.410 [2024-07-15 09:45:17.216489] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.410 [2024-07-15 09:45:17.332788] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:18:49.410 [2024-07-15 09:45:17.335365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.410 [2024-07-15 09:45:17.336116] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:49.410 [2024-07-15 09:45:17.336128] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:49.410 09:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:49.410 09:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:18:49.410 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:49.667 [2024-07-15 09:45:17.615458] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:49.667 [2024-07-15 09:45:17.615533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:49.667 [2024-07-15 09:45:17.615538] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:49.667 [2024-07-15 09:45:17.615545] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:49.667 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:18:49.667 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:49.667 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:49.667 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:49.667 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:49.667 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:49.667 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:49.667 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:49.668 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:49.668 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:49.668 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.668 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:49.925 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:49.925 "name": "Existed_Raid", 00:18:49.925 "uuid": "f17e5d79-428e-11ef-a0af-c98d8ee52a94", 00:18:49.925 "strip_size_kb": 64, 00:18:49.925 "state": "configuring", 00:18:49.925 "raid_level": "raid0", 00:18:49.925 "superblock": true, 00:18:49.925 "num_base_bdevs": 2, 00:18:49.925 "num_base_bdevs_discovered": 0, 00:18:49.925 "num_base_bdevs_operational": 2, 00:18:49.925 "base_bdevs_list": [ 00:18:49.925 { 00:18:49.925 "name": "BaseBdev1", 00:18:49.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.925 "is_configured": false, 00:18:49.925 "data_offset": 0, 00:18:49.925 "data_size": 0 00:18:49.925 }, 
00:18:49.925 { 00:18:49.925 "name": "BaseBdev2", 00:18:49.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.925 "is_configured": false, 00:18:49.925 "data_offset": 0, 00:18:49.925 "data_size": 0 00:18:49.925 } 00:18:49.925 ] 00:18:49.925 }' 00:18:49.925 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:49.925 09:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.183 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:50.441 [2024-07-15 09:45:18.339454] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:50.441 [2024-07-15 09:45:18.339487] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x230b9e434500 name Existed_Raid, state configuring 00:18:50.441 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:50.699 [2024-07-15 09:45:18.627468] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:50.699 [2024-07-15 09:45:18.627523] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:50.699 [2024-07-15 09:45:18.627527] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:50.699 [2024-07-15 09:45:18.627534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:50.699 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:50.957 [2024-07-15 09:45:18.828628] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:50.957 BaseBdev1 00:18:50.957 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:18:50.957 09:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:50.957 09:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:50.957 09:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:50.957 09:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:50.957 09:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:50.957 09:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:51.214 09:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:51.214 [ 00:18:51.214 { 00:18:51.214 "name": "BaseBdev1", 00:18:51.214 "aliases": [ 00:18:51.214 "f2374e5d-428e-11ef-a0af-c98d8ee52a94" 00:18:51.214 ], 00:18:51.214 "product_name": "Malloc disk", 00:18:51.214 "block_size": 512, 00:18:51.214 "num_blocks": 65536, 00:18:51.214 "uuid": "f2374e5d-428e-11ef-a0af-c98d8ee52a94", 00:18:51.214 "assigned_rate_limits": { 00:18:51.214 "rw_ios_per_sec": 0, 00:18:51.214 "rw_mbytes_per_sec": 
0, 00:18:51.214 "r_mbytes_per_sec": 0, 00:18:51.214 "w_mbytes_per_sec": 0 00:18:51.214 }, 00:18:51.214 "claimed": true, 00:18:51.214 "claim_type": "exclusive_write", 00:18:51.214 "zoned": false, 00:18:51.214 "supported_io_types": { 00:18:51.214 "read": true, 00:18:51.215 "write": true, 00:18:51.215 "unmap": true, 00:18:51.215 "flush": true, 00:18:51.215 "reset": true, 00:18:51.215 "nvme_admin": false, 00:18:51.215 "nvme_io": false, 00:18:51.215 "nvme_io_md": false, 00:18:51.215 "write_zeroes": true, 00:18:51.215 "zcopy": true, 00:18:51.215 "get_zone_info": false, 00:18:51.215 "zone_management": false, 00:18:51.215 "zone_append": false, 00:18:51.215 "compare": false, 00:18:51.215 "compare_and_write": false, 00:18:51.215 "abort": true, 00:18:51.215 "seek_hole": false, 00:18:51.215 "seek_data": false, 00:18:51.215 "copy": true, 00:18:51.215 "nvme_iov_md": false 00:18:51.215 }, 00:18:51.215 "memory_domains": [ 00:18:51.215 { 00:18:51.215 "dma_device_id": "system", 00:18:51.215 "dma_device_type": 1 00:18:51.215 }, 00:18:51.215 { 00:18:51.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.215 "dma_device_type": 2 00:18:51.215 } 00:18:51.215 ], 00:18:51.215 "driver_specific": {} 00:18:51.215 } 00:18:51.215 ] 00:18:51.215 09:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:51.215 09:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:18:51.215 09:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:51.215 09:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:51.215 09:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:51.215 09:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:51.215 09:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:51.215 09:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:51.215 09:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:51.215 09:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:51.215 09:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:51.215 09:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.215 09:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:51.473 09:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:51.473 "name": "Existed_Raid", 00:18:51.473 "uuid": "f218c92d-428e-11ef-a0af-c98d8ee52a94", 00:18:51.473 "strip_size_kb": 64, 00:18:51.473 "state": "configuring", 00:18:51.473 "raid_level": "raid0", 00:18:51.473 "superblock": true, 00:18:51.473 "num_base_bdevs": 2, 00:18:51.473 "num_base_bdevs_discovered": 1, 00:18:51.473 "num_base_bdevs_operational": 2, 00:18:51.473 "base_bdevs_list": [ 00:18:51.473 { 00:18:51.473 "name": "BaseBdev1", 00:18:51.473 "uuid": "f2374e5d-428e-11ef-a0af-c98d8ee52a94", 00:18:51.473 "is_configured": true, 00:18:51.473 "data_offset": 2048, 00:18:51.473 "data_size": 
63488 00:18:51.473 }, 00:18:51.473 { 00:18:51.473 "name": "BaseBdev2", 00:18:51.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.473 "is_configured": false, 00:18:51.473 "data_offset": 0, 00:18:51.473 "data_size": 0 00:18:51.473 } 00:18:51.473 ] 00:18:51.473 }' 00:18:51.473 09:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:51.473 09:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.731 09:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:51.989 [2024-07-15 09:45:20.027485] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:51.989 [2024-07-15 09:45:20.027528] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x230b9e434500 name Existed_Raid, state configuring 00:18:51.989 09:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:52.247 [2024-07-15 09:45:20.239500] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:52.247 [2024-07-15 09:45:20.240464] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:52.247 [2024-07-15 09:45:20.240511] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:52.247 09:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:18:52.247 09:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:52.247 09:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:18:52.247 09:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:52.247 09:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:52.247 09:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:52.247 09:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:52.247 09:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:52.247 09:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:52.247 09:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:52.247 09:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:52.247 09:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:52.247 09:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:52.247 09:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.505 09:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:52.505 "name": "Existed_Raid", 00:18:52.505 "uuid": "f30ec312-428e-11ef-a0af-c98d8ee52a94", 00:18:52.505 "strip_size_kb": 64, 00:18:52.506 
"state": "configuring", 00:18:52.506 "raid_level": "raid0", 00:18:52.506 "superblock": true, 00:18:52.506 "num_base_bdevs": 2, 00:18:52.506 "num_base_bdevs_discovered": 1, 00:18:52.506 "num_base_bdevs_operational": 2, 00:18:52.506 "base_bdevs_list": [ 00:18:52.506 { 00:18:52.506 "name": "BaseBdev1", 00:18:52.506 "uuid": "f2374e5d-428e-11ef-a0af-c98d8ee52a94", 00:18:52.506 "is_configured": true, 00:18:52.506 "data_offset": 2048, 00:18:52.506 "data_size": 63488 00:18:52.506 }, 00:18:52.506 { 00:18:52.506 "name": "BaseBdev2", 00:18:52.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.506 "is_configured": false, 00:18:52.506 "data_offset": 0, 00:18:52.506 "data_size": 0 00:18:52.506 } 00:18:52.506 ] 00:18:52.506 }' 00:18:52.506 09:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:52.506 09:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.764 09:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:53.043 [2024-07-15 09:45:20.935627] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:53.043 [2024-07-15 09:45:20.935710] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x230b9e434a00 00:18:53.043 [2024-07-15 09:45:20.935716] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:53.043 [2024-07-15 09:45:20.935734] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x230b9e497e20 00:18:53.043 [2024-07-15 09:45:20.935769] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x230b9e434a00 00:18:53.043 [2024-07-15 09:45:20.935772] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x230b9e434a00 00:18:53.043 [2024-07-15 09:45:20.935789] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:53.043 BaseBdev2 00:18:53.043 09:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:18:53.043 09:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:53.043 09:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:53.043 09:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:53.043 09:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:53.043 09:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:53.043 09:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:53.301 09:45:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:53.301 [ 00:18:53.301 { 00:18:53.301 "name": "BaseBdev2", 00:18:53.301 "aliases": [ 00:18:53.301 "f378f6f2-428e-11ef-a0af-c98d8ee52a94" 00:18:53.301 ], 00:18:53.301 "product_name": "Malloc disk", 00:18:53.301 "block_size": 512, 00:18:53.301 "num_blocks": 65536, 00:18:53.301 "uuid": "f378f6f2-428e-11ef-a0af-c98d8ee52a94", 00:18:53.301 "assigned_rate_limits": { 00:18:53.301 "rw_ios_per_sec": 0, 
00:18:53.301 "rw_mbytes_per_sec": 0, 00:18:53.301 "r_mbytes_per_sec": 0, 00:18:53.301 "w_mbytes_per_sec": 0 00:18:53.301 }, 00:18:53.301 "claimed": true, 00:18:53.301 "claim_type": "exclusive_write", 00:18:53.301 "zoned": false, 00:18:53.301 "supported_io_types": { 00:18:53.301 "read": true, 00:18:53.301 "write": true, 00:18:53.301 "unmap": true, 00:18:53.301 "flush": true, 00:18:53.301 "reset": true, 00:18:53.301 "nvme_admin": false, 00:18:53.301 "nvme_io": false, 00:18:53.301 "nvme_io_md": false, 00:18:53.301 "write_zeroes": true, 00:18:53.301 "zcopy": true, 00:18:53.301 "get_zone_info": false, 00:18:53.301 "zone_management": false, 00:18:53.301 "zone_append": false, 00:18:53.301 "compare": false, 00:18:53.301 "compare_and_write": false, 00:18:53.301 "abort": true, 00:18:53.301 "seek_hole": false, 00:18:53.301 "seek_data": false, 00:18:53.301 "copy": true, 00:18:53.301 "nvme_iov_md": false 00:18:53.301 }, 00:18:53.301 "memory_domains": [ 00:18:53.301 { 00:18:53.301 "dma_device_id": "system", 00:18:53.301 "dma_device_type": 1 00:18:53.301 }, 00:18:53.301 { 00:18:53.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:53.301 "dma_device_type": 2 00:18:53.301 } 00:18:53.301 ], 00:18:53.301 "driver_specific": {} 00:18:53.301 } 00:18:53.301 ] 00:18:53.301 09:45:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:53.301 09:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:53.301 09:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:53.301 09:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:18:53.301 09:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:53.301 09:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:53.301 09:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:53.301 09:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:53.301 09:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:53.301 09:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:53.301 09:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:53.301 09:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:53.301 09:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:53.301 09:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.301 09:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:53.558 09:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:53.558 "name": "Existed_Raid", 00:18:53.558 "uuid": "f30ec312-428e-11ef-a0af-c98d8ee52a94", 00:18:53.558 "strip_size_kb": 64, 00:18:53.558 "state": "online", 00:18:53.558 "raid_level": "raid0", 00:18:53.558 "superblock": true, 00:18:53.558 "num_base_bdevs": 2, 00:18:53.558 "num_base_bdevs_discovered": 2, 00:18:53.558 "num_base_bdevs_operational": 2, 
00:18:53.558 "base_bdevs_list": [ 00:18:53.558 { 00:18:53.558 "name": "BaseBdev1", 00:18:53.558 "uuid": "f2374e5d-428e-11ef-a0af-c98d8ee52a94", 00:18:53.558 "is_configured": true, 00:18:53.558 "data_offset": 2048, 00:18:53.558 "data_size": 63488 00:18:53.558 }, 00:18:53.558 { 00:18:53.558 "name": "BaseBdev2", 00:18:53.558 "uuid": "f378f6f2-428e-11ef-a0af-c98d8ee52a94", 00:18:53.558 "is_configured": true, 00:18:53.558 "data_offset": 2048, 00:18:53.558 "data_size": 63488 00:18:53.558 } 00:18:53.558 ] 00:18:53.558 }' 00:18:53.558 09:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:53.558 09:45:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.815 09:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:18:53.815 09:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:53.815 09:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:53.815 09:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:53.815 09:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:53.815 09:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:18:53.815 09:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:53.815 09:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:54.072 [2024-07-15 09:45:22.019980] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:54.072 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:54.072 "name": "Existed_Raid", 00:18:54.072 "aliases": [ 00:18:54.072 "f30ec312-428e-11ef-a0af-c98d8ee52a94" 00:18:54.072 ], 00:18:54.072 "product_name": "Raid Volume", 00:18:54.072 "block_size": 512, 00:18:54.072 "num_blocks": 126976, 00:18:54.072 "uuid": "f30ec312-428e-11ef-a0af-c98d8ee52a94", 00:18:54.072 "assigned_rate_limits": { 00:18:54.072 "rw_ios_per_sec": 0, 00:18:54.072 "rw_mbytes_per_sec": 0, 00:18:54.072 "r_mbytes_per_sec": 0, 00:18:54.072 "w_mbytes_per_sec": 0 00:18:54.072 }, 00:18:54.072 "claimed": false, 00:18:54.072 "zoned": false, 00:18:54.072 "supported_io_types": { 00:18:54.072 "read": true, 00:18:54.072 "write": true, 00:18:54.072 "unmap": true, 00:18:54.072 "flush": true, 00:18:54.072 "reset": true, 00:18:54.072 "nvme_admin": false, 00:18:54.072 "nvme_io": false, 00:18:54.072 "nvme_io_md": false, 00:18:54.072 "write_zeroes": true, 00:18:54.072 "zcopy": false, 00:18:54.072 "get_zone_info": false, 00:18:54.072 "zone_management": false, 00:18:54.072 "zone_append": false, 00:18:54.072 "compare": false, 00:18:54.072 "compare_and_write": false, 00:18:54.072 "abort": false, 00:18:54.072 "seek_hole": false, 00:18:54.072 "seek_data": false, 00:18:54.072 "copy": false, 00:18:54.073 "nvme_iov_md": false 00:18:54.073 }, 00:18:54.073 "memory_domains": [ 00:18:54.073 { 00:18:54.073 "dma_device_id": "system", 00:18:54.073 "dma_device_type": 1 00:18:54.073 }, 00:18:54.073 { 00:18:54.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.073 "dma_device_type": 2 00:18:54.073 }, 00:18:54.073 { 00:18:54.073 "dma_device_id": "system", 00:18:54.073 "dma_device_type": 1 00:18:54.073 
}, 00:18:54.073 { 00:18:54.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.073 "dma_device_type": 2 00:18:54.073 } 00:18:54.073 ], 00:18:54.073 "driver_specific": { 00:18:54.073 "raid": { 00:18:54.073 "uuid": "f30ec312-428e-11ef-a0af-c98d8ee52a94", 00:18:54.073 "strip_size_kb": 64, 00:18:54.073 "state": "online", 00:18:54.073 "raid_level": "raid0", 00:18:54.073 "superblock": true, 00:18:54.073 "num_base_bdevs": 2, 00:18:54.073 "num_base_bdevs_discovered": 2, 00:18:54.073 "num_base_bdevs_operational": 2, 00:18:54.073 "base_bdevs_list": [ 00:18:54.073 { 00:18:54.073 "name": "BaseBdev1", 00:18:54.073 "uuid": "f2374e5d-428e-11ef-a0af-c98d8ee52a94", 00:18:54.073 "is_configured": true, 00:18:54.073 "data_offset": 2048, 00:18:54.073 "data_size": 63488 00:18:54.073 }, 00:18:54.073 { 00:18:54.073 "name": "BaseBdev2", 00:18:54.073 "uuid": "f378f6f2-428e-11ef-a0af-c98d8ee52a94", 00:18:54.073 "is_configured": true, 00:18:54.073 "data_offset": 2048, 00:18:54.073 "data_size": 63488 00:18:54.073 } 00:18:54.073 ] 00:18:54.073 } 00:18:54.073 } 00:18:54.073 }' 00:18:54.073 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:54.073 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:18:54.073 BaseBdev2' 00:18:54.073 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:54.073 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:54.073 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:54.330 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:54.330 "name": "BaseBdev1", 00:18:54.330 "aliases": [ 00:18:54.330 "f2374e5d-428e-11ef-a0af-c98d8ee52a94" 00:18:54.330 ], 00:18:54.330 "product_name": "Malloc disk", 00:18:54.330 "block_size": 512, 00:18:54.330 "num_blocks": 65536, 00:18:54.330 "uuid": "f2374e5d-428e-11ef-a0af-c98d8ee52a94", 00:18:54.330 "assigned_rate_limits": { 00:18:54.330 "rw_ios_per_sec": 0, 00:18:54.330 "rw_mbytes_per_sec": 0, 00:18:54.330 "r_mbytes_per_sec": 0, 00:18:54.330 "w_mbytes_per_sec": 0 00:18:54.330 }, 00:18:54.330 "claimed": true, 00:18:54.330 "claim_type": "exclusive_write", 00:18:54.330 "zoned": false, 00:18:54.330 "supported_io_types": { 00:18:54.330 "read": true, 00:18:54.330 "write": true, 00:18:54.330 "unmap": true, 00:18:54.330 "flush": true, 00:18:54.330 "reset": true, 00:18:54.330 "nvme_admin": false, 00:18:54.330 "nvme_io": false, 00:18:54.330 "nvme_io_md": false, 00:18:54.330 "write_zeroes": true, 00:18:54.330 "zcopy": true, 00:18:54.330 "get_zone_info": false, 00:18:54.330 "zone_management": false, 00:18:54.330 "zone_append": false, 00:18:54.330 "compare": false, 00:18:54.330 "compare_and_write": false, 00:18:54.330 "abort": true, 00:18:54.330 "seek_hole": false, 00:18:54.330 "seek_data": false, 00:18:54.330 "copy": true, 00:18:54.330 "nvme_iov_md": false 00:18:54.330 }, 00:18:54.330 "memory_domains": [ 00:18:54.330 { 00:18:54.330 "dma_device_id": "system", 00:18:54.330 "dma_device_type": 1 00:18:54.330 }, 00:18:54.330 { 00:18:54.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.330 "dma_device_type": 2 00:18:54.330 } 00:18:54.330 ], 00:18:54.330 "driver_specific": {} 00:18:54.330 }' 00:18:54.330 09:45:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:54.330 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:54.330 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:54.330 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:54.330 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:54.330 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:54.330 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:54.330 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:54.330 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:54.330 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:54.330 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:54.330 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:54.330 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:54.330 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:54.330 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:54.588 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:54.588 "name": "BaseBdev2", 00:18:54.588 "aliases": [ 00:18:54.588 "f378f6f2-428e-11ef-a0af-c98d8ee52a94" 00:18:54.588 ], 00:18:54.588 "product_name": "Malloc disk", 00:18:54.588 "block_size": 512, 00:18:54.588 "num_blocks": 65536, 00:18:54.588 "uuid": "f378f6f2-428e-11ef-a0af-c98d8ee52a94", 00:18:54.588 "assigned_rate_limits": { 00:18:54.588 "rw_ios_per_sec": 0, 00:18:54.588 "rw_mbytes_per_sec": 0, 00:18:54.588 "r_mbytes_per_sec": 0, 00:18:54.588 "w_mbytes_per_sec": 0 00:18:54.588 }, 00:18:54.588 "claimed": true, 00:18:54.588 "claim_type": "exclusive_write", 00:18:54.588 "zoned": false, 00:18:54.588 "supported_io_types": { 00:18:54.588 "read": true, 00:18:54.588 "write": true, 00:18:54.588 "unmap": true, 00:18:54.588 "flush": true, 00:18:54.588 "reset": true, 00:18:54.588 "nvme_admin": false, 00:18:54.588 "nvme_io": false, 00:18:54.588 "nvme_io_md": false, 00:18:54.588 "write_zeroes": true, 00:18:54.588 "zcopy": true, 00:18:54.588 "get_zone_info": false, 00:18:54.588 "zone_management": false, 00:18:54.588 "zone_append": false, 00:18:54.588 "compare": false, 00:18:54.588 "compare_and_write": false, 00:18:54.588 "abort": true, 00:18:54.588 "seek_hole": false, 00:18:54.588 "seek_data": false, 00:18:54.588 "copy": true, 00:18:54.588 "nvme_iov_md": false 00:18:54.588 }, 00:18:54.588 "memory_domains": [ 00:18:54.588 { 00:18:54.588 "dma_device_id": "system", 00:18:54.588 "dma_device_type": 1 00:18:54.588 }, 00:18:54.588 { 00:18:54.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.588 "dma_device_type": 2 00:18:54.588 } 00:18:54.588 ], 00:18:54.588 "driver_specific": {} 00:18:54.588 }' 00:18:54.588 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:54.588 09:45:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:54.588 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:54.588 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:54.588 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:54.588 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:54.588 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:54.588 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:54.588 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:54.588 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:54.588 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:54.588 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:54.588 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:54.845 [2024-07-15 09:45:22.788302] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:54.845 [2024-07-15 09:45:22.788331] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:54.845 [2024-07-15 09:45:22.788342] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:54.845 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:54.845 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:18:54.845 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:54.845 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:18:54.845 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:18:54.845 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:18:54.845 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:54.845 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:18:54.845 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:54.845 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:54.845 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:54.845 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:54.845 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:54.845 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:54.845 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:54.845 09:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:54.845 09:45:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.103 09:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:55.103 "name": "Existed_Raid", 00:18:55.103 "uuid": "f30ec312-428e-11ef-a0af-c98d8ee52a94", 00:18:55.103 "strip_size_kb": 64, 00:18:55.103 "state": "offline", 00:18:55.103 "raid_level": "raid0", 00:18:55.103 "superblock": true, 00:18:55.103 "num_base_bdevs": 2, 00:18:55.103 "num_base_bdevs_discovered": 1, 00:18:55.103 "num_base_bdevs_operational": 1, 00:18:55.103 "base_bdevs_list": [ 00:18:55.103 { 00:18:55.103 "name": null, 00:18:55.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.103 "is_configured": false, 00:18:55.103 "data_offset": 2048, 00:18:55.103 "data_size": 63488 00:18:55.103 }, 00:18:55.103 { 00:18:55.103 "name": "BaseBdev2", 00:18:55.103 "uuid": "f378f6f2-428e-11ef-a0af-c98d8ee52a94", 00:18:55.103 "is_configured": true, 00:18:55.103 "data_offset": 2048, 00:18:55.103 "data_size": 63488 00:18:55.103 } 00:18:55.103 ] 00:18:55.103 }' 00:18:55.103 09:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:55.103 09:45:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.360 09:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:55.360 09:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:55.360 09:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.360 09:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:55.616 09:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:55.616 09:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:55.616 09:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:55.616 [2024-07-15 09:45:23.681347] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:55.616 [2024-07-15 09:45:23.681380] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x230b9e434a00 name Existed_Raid, state offline 00:18:55.873 09:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:55.873 09:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:55.873 09:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.873 09:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:55.873 09:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:55.873 09:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:55.873 09:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:18:55.873 09:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 48939 00:18:55.873 09:45:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 48939 ']' 00:18:55.873 09:45:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 48939 00:18:55.873 09:45:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:18:55.873 09:45:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:18:55.873 09:45:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:18:55.873 09:45:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 48939 00:18:55.873 09:45:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:18:55.873 killing process with pid 48939 00:18:55.873 09:45:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:18:55.873 09:45:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48939' 00:18:55.873 09:45:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 48939 00:18:55.873 [2024-07-15 09:45:23.898922] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:55.873 09:45:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 48939 00:18:55.873 [2024-07-15 09:45:23.898952] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:56.130 09:45:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:18:56.130 00:18:56.130 real 0m7.698s 00:18:56.130 user 0m12.866s 00:18:56.130 sys 0m1.728s 00:18:56.130 09:45:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:56.130 ************************************ 00:18:56.130 END TEST raid_state_function_test_sb 00:18:56.130 ************************************ 00:18:56.130 09:45:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.130 09:45:24 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:56.130 09:45:24 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:18:56.130 09:45:24 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:56.130 09:45:24 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:56.130 09:45:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:56.130 ************************************ 00:18:56.130 START TEST raid_superblock_test 00:18:56.130 ************************************ 00:18:56.130 09:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 2 00:18:56.130 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:18:56.130 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:18:56.130 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:18:56.130 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:18:56.130 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:18:56.130 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:18:56.388 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:18:56.388 09:45:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:18:56.388 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:18:56.388 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:18:56.389 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:18:56.389 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:18:56.389 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:18:56.389 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:18:56.389 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:18:56.389 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:18:56.389 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=49205 00:18:56.389 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:56.389 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 49205 /var/tmp/spdk-raid.sock 00:18:56.389 09:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 49205 ']' 00:18:56.389 09:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:56.389 09:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:56.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:56.389 09:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:56.389 09:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:56.389 09:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.389 [2024-07-15 09:45:24.231933] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:18:56.389 [2024-07-15 09:45:24.232264] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:56.955 EAL: TSC is not safe to use in SMP mode 00:18:56.955 EAL: TSC is not invariant 00:18:56.955 [2024-07-15 09:45:24.960800] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.214 [2024-07-15 09:45:25.077486] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:18:57.214 [2024-07-15 09:45:25.079977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.214 [2024-07-15 09:45:25.080721] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:57.214 [2024-07-15 09:45:25.080732] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:57.214 09:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:57.214 09:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:18:57.214 09:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:18:57.214 09:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:57.214 09:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:18:57.214 09:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:18:57.214 09:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:57.214 09:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:57.214 09:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:57.214 09:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:57.214 09:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:57.473 malloc1 00:18:57.473 09:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:57.473 [2024-07-15 09:45:25.555936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:57.473 [2024-07-15 09:45:25.556000] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:57.473 [2024-07-15 09:45:25.556010] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe1ad5034780 00:18:57.473 [2024-07-15 09:45:25.556017] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:57.473 [2024-07-15 09:45:25.557073] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:57.473 [2024-07-15 09:45:25.557097] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:57.738 pt1 00:18:57.738 09:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:57.738 09:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:57.738 09:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:18:57.738 09:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:18:57.738 09:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:57.738 09:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:57.738 09:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:57.738 09:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:57.738 09:45:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:57.738 malloc2 00:18:57.738 09:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:58.008 [2024-07-15 09:45:25.960091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:58.008 [2024-07-15 09:45:25.960149] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:58.008 [2024-07-15 09:45:25.960158] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe1ad5034c80 00:18:58.008 [2024-07-15 09:45:25.960165] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:58.008 [2024-07-15 09:45:25.960841] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:58.008 [2024-07-15 09:45:25.960868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:58.008 pt2 00:18:58.008 09:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:58.008 09:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:58.008 09:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:18:58.267 [2024-07-15 09:45:26.140188] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:58.268 [2024-07-15 09:45:26.140823] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:58.268 [2024-07-15 09:45:26.140880] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xe1ad5034f00 00:18:58.268 [2024-07-15 09:45:26.140885] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:58.268 [2024-07-15 09:45:26.140917] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe1ad5097e20 00:18:58.268 [2024-07-15 09:45:26.140994] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe1ad5034f00 00:18:58.268 [2024-07-15 09:45:26.140997] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xe1ad5034f00 00:18:58.268 [2024-07-15 09:45:26.141022] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:58.268 09:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:18:58.268 09:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:58.268 09:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:58.268 09:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:58.268 09:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:58.268 09:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:58.268 09:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:58.268 09:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:58.268 09:45:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:58.268 09:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:58.268 09:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.268 09:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:58.527 09:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:58.527 "name": "raid_bdev1", 00:18:58.527 "uuid": "f69322d1-428e-11ef-a0af-c98d8ee52a94", 00:18:58.527 "strip_size_kb": 64, 00:18:58.527 "state": "online", 00:18:58.527 "raid_level": "raid0", 00:18:58.527 "superblock": true, 00:18:58.527 "num_base_bdevs": 2, 00:18:58.527 "num_base_bdevs_discovered": 2, 00:18:58.527 "num_base_bdevs_operational": 2, 00:18:58.527 "base_bdevs_list": [ 00:18:58.527 { 00:18:58.527 "name": "pt1", 00:18:58.527 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:58.527 "is_configured": true, 00:18:58.527 "data_offset": 2048, 00:18:58.527 "data_size": 63488 00:18:58.527 }, 00:18:58.527 { 00:18:58.527 "name": "pt2", 00:18:58.527 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:58.527 "is_configured": true, 00:18:58.527 "data_offset": 2048, 00:18:58.527 "data_size": 63488 00:18:58.527 } 00:18:58.527 ] 00:18:58.527 }' 00:18:58.527 09:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:58.527 09:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.785 09:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:18:58.785 09:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:58.785 09:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:58.785 09:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:58.785 09:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:58.785 09:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:58.786 09:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:58.786 09:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:58.786 [2024-07-15 09:45:26.820466] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:58.786 09:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:58.786 "name": "raid_bdev1", 00:18:58.786 "aliases": [ 00:18:58.786 "f69322d1-428e-11ef-a0af-c98d8ee52a94" 00:18:58.786 ], 00:18:58.786 "product_name": "Raid Volume", 00:18:58.786 "block_size": 512, 00:18:58.786 "num_blocks": 126976, 00:18:58.786 "uuid": "f69322d1-428e-11ef-a0af-c98d8ee52a94", 00:18:58.786 "assigned_rate_limits": { 00:18:58.786 "rw_ios_per_sec": 0, 00:18:58.786 "rw_mbytes_per_sec": 0, 00:18:58.786 "r_mbytes_per_sec": 0, 00:18:58.786 "w_mbytes_per_sec": 0 00:18:58.786 }, 00:18:58.786 "claimed": false, 00:18:58.786 "zoned": false, 00:18:58.786 "supported_io_types": { 00:18:58.786 "read": true, 00:18:58.786 "write": true, 00:18:58.786 "unmap": true, 00:18:58.786 "flush": true, 00:18:58.786 "reset": true, 00:18:58.786 "nvme_admin": false, 00:18:58.786 "nvme_io": 
false, 00:18:58.786 "nvme_io_md": false, 00:18:58.786 "write_zeroes": true, 00:18:58.786 "zcopy": false, 00:18:58.786 "get_zone_info": false, 00:18:58.786 "zone_management": false, 00:18:58.786 "zone_append": false, 00:18:58.786 "compare": false, 00:18:58.786 "compare_and_write": false, 00:18:58.786 "abort": false, 00:18:58.786 "seek_hole": false, 00:18:58.786 "seek_data": false, 00:18:58.786 "copy": false, 00:18:58.786 "nvme_iov_md": false 00:18:58.786 }, 00:18:58.786 "memory_domains": [ 00:18:58.786 { 00:18:58.786 "dma_device_id": "system", 00:18:58.786 "dma_device_type": 1 00:18:58.786 }, 00:18:58.786 { 00:18:58.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.786 "dma_device_type": 2 00:18:58.786 }, 00:18:58.786 { 00:18:58.786 "dma_device_id": "system", 00:18:58.786 "dma_device_type": 1 00:18:58.786 }, 00:18:58.786 { 00:18:58.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.786 "dma_device_type": 2 00:18:58.786 } 00:18:58.786 ], 00:18:58.786 "driver_specific": { 00:18:58.786 "raid": { 00:18:58.786 "uuid": "f69322d1-428e-11ef-a0af-c98d8ee52a94", 00:18:58.786 "strip_size_kb": 64, 00:18:58.786 "state": "online", 00:18:58.786 "raid_level": "raid0", 00:18:58.786 "superblock": true, 00:18:58.786 "num_base_bdevs": 2, 00:18:58.786 "num_base_bdevs_discovered": 2, 00:18:58.786 "num_base_bdevs_operational": 2, 00:18:58.786 "base_bdevs_list": [ 00:18:58.786 { 00:18:58.786 "name": "pt1", 00:18:58.786 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:58.786 "is_configured": true, 00:18:58.786 "data_offset": 2048, 00:18:58.786 "data_size": 63488 00:18:58.786 }, 00:18:58.786 { 00:18:58.786 "name": "pt2", 00:18:58.786 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:58.786 "is_configured": true, 00:18:58.786 "data_offset": 2048, 00:18:58.786 "data_size": 63488 00:18:58.786 } 00:18:58.786 ] 00:18:58.786 } 00:18:58.786 } 00:18:58.786 }' 00:18:58.786 09:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:58.786 09:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:58.786 pt2' 00:18:58.786 09:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:58.786 09:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:58.786 09:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:59.044 09:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:59.044 "name": "pt1", 00:18:59.044 "aliases": [ 00:18:59.044 "00000000-0000-0000-0000-000000000001" 00:18:59.044 ], 00:18:59.044 "product_name": "passthru", 00:18:59.044 "block_size": 512, 00:18:59.044 "num_blocks": 65536, 00:18:59.044 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:59.044 "assigned_rate_limits": { 00:18:59.044 "rw_ios_per_sec": 0, 00:18:59.044 "rw_mbytes_per_sec": 0, 00:18:59.044 "r_mbytes_per_sec": 0, 00:18:59.044 "w_mbytes_per_sec": 0 00:18:59.045 }, 00:18:59.045 "claimed": true, 00:18:59.045 "claim_type": "exclusive_write", 00:18:59.045 "zoned": false, 00:18:59.045 "supported_io_types": { 00:18:59.045 "read": true, 00:18:59.045 "write": true, 00:18:59.045 "unmap": true, 00:18:59.045 "flush": true, 00:18:59.045 "reset": true, 00:18:59.045 "nvme_admin": false, 00:18:59.045 "nvme_io": false, 00:18:59.045 "nvme_io_md": false, 00:18:59.045 "write_zeroes": true, 
00:18:59.045 "zcopy": true, 00:18:59.045 "get_zone_info": false, 00:18:59.045 "zone_management": false, 00:18:59.045 "zone_append": false, 00:18:59.045 "compare": false, 00:18:59.045 "compare_and_write": false, 00:18:59.045 "abort": true, 00:18:59.045 "seek_hole": false, 00:18:59.045 "seek_data": false, 00:18:59.045 "copy": true, 00:18:59.045 "nvme_iov_md": false 00:18:59.045 }, 00:18:59.045 "memory_domains": [ 00:18:59.045 { 00:18:59.045 "dma_device_id": "system", 00:18:59.045 "dma_device_type": 1 00:18:59.045 }, 00:18:59.045 { 00:18:59.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.045 "dma_device_type": 2 00:18:59.045 } 00:18:59.045 ], 00:18:59.045 "driver_specific": { 00:18:59.045 "passthru": { 00:18:59.045 "name": "pt1", 00:18:59.045 "base_bdev_name": "malloc1" 00:18:59.045 } 00:18:59.045 } 00:18:59.045 }' 00:18:59.045 09:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:59.045 09:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:59.045 09:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:59.045 09:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:59.045 09:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:59.045 09:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:59.045 09:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:59.045 09:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:59.045 09:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:59.045 09:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:59.045 09:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:59.045 09:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:59.045 09:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:59.045 09:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:59.045 09:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:59.302 09:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:59.302 "name": "pt2", 00:18:59.302 "aliases": [ 00:18:59.302 "00000000-0000-0000-0000-000000000002" 00:18:59.302 ], 00:18:59.302 "product_name": "passthru", 00:18:59.302 "block_size": 512, 00:18:59.302 "num_blocks": 65536, 00:18:59.302 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:59.302 "assigned_rate_limits": { 00:18:59.302 "rw_ios_per_sec": 0, 00:18:59.302 "rw_mbytes_per_sec": 0, 00:18:59.302 "r_mbytes_per_sec": 0, 00:18:59.302 "w_mbytes_per_sec": 0 00:18:59.302 }, 00:18:59.302 "claimed": true, 00:18:59.302 "claim_type": "exclusive_write", 00:18:59.302 "zoned": false, 00:18:59.302 "supported_io_types": { 00:18:59.302 "read": true, 00:18:59.302 "write": true, 00:18:59.302 "unmap": true, 00:18:59.302 "flush": true, 00:18:59.302 "reset": true, 00:18:59.302 "nvme_admin": false, 00:18:59.302 "nvme_io": false, 00:18:59.302 "nvme_io_md": false, 00:18:59.302 "write_zeroes": true, 00:18:59.302 "zcopy": true, 00:18:59.302 "get_zone_info": false, 00:18:59.302 "zone_management": false, 00:18:59.302 "zone_append": false, 00:18:59.302 
"compare": false, 00:18:59.302 "compare_and_write": false, 00:18:59.302 "abort": true, 00:18:59.302 "seek_hole": false, 00:18:59.302 "seek_data": false, 00:18:59.302 "copy": true, 00:18:59.302 "nvme_iov_md": false 00:18:59.302 }, 00:18:59.302 "memory_domains": [ 00:18:59.302 { 00:18:59.302 "dma_device_id": "system", 00:18:59.302 "dma_device_type": 1 00:18:59.302 }, 00:18:59.302 { 00:18:59.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.302 "dma_device_type": 2 00:18:59.302 } 00:18:59.302 ], 00:18:59.302 "driver_specific": { 00:18:59.302 "passthru": { 00:18:59.302 "name": "pt2", 00:18:59.302 "base_bdev_name": "malloc2" 00:18:59.302 } 00:18:59.302 } 00:18:59.302 }' 00:18:59.302 09:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:59.302 09:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:59.302 09:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:59.302 09:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:59.302 09:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:59.302 09:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:59.302 09:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:59.302 09:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:59.302 09:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:59.302 09:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:59.302 09:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:59.302 09:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:59.560 09:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:18:59.560 09:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:59.560 [2024-07-15 09:45:27.580750] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:59.560 09:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=f69322d1-428e-11ef-a0af-c98d8ee52a94 00:18:59.560 09:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z f69322d1-428e-11ef-a0af-c98d8ee52a94 ']' 00:18:59.560 09:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:59.817 [2024-07-15 09:45:27.792805] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:59.817 [2024-07-15 09:45:27.792830] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:59.817 [2024-07-15 09:45:27.792845] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:59.817 [2024-07-15 09:45:27.792857] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:59.817 [2024-07-15 09:45:27.792861] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe1ad5034f00 name raid_bdev1, state offline 00:18:59.817 09:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:18:59.817 09:45:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:19:00.076 09:45:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:19:00.076 09:45:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:19:00.076 09:45:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:19:00.076 09:45:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:00.334 09:45:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:19:00.334 09:45:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:00.334 09:45:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:00.334 09:45:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:00.593 09:45:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:19:00.593 09:45:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:19:00.593 09:45:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:19:00.593 09:45:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:19:00.593 09:45:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:00.593 09:45:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:00.593 09:45:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:00.593 09:45:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:00.593 09:45:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:00.594 09:45:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:00.594 09:45:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:00.594 09:45:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:00.594 09:45:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:19:00.852 [2024-07-15 09:45:28.793197] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:00.852 [2024-07-15 09:45:28.793894] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:00.852 [2024-07-15 09:45:28.793920] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a 
different raid bdev found on bdev malloc1 00:19:00.852 [2024-07-15 09:45:28.793960] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:00.852 [2024-07-15 09:45:28.793970] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:00.852 [2024-07-15 09:45:28.793973] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe1ad5034c80 name raid_bdev1, state configuring 00:19:00.852 request: 00:19:00.852 { 00:19:00.852 "name": "raid_bdev1", 00:19:00.852 "raid_level": "raid0", 00:19:00.852 "base_bdevs": [ 00:19:00.852 "malloc1", 00:19:00.852 "malloc2" 00:19:00.852 ], 00:19:00.852 "strip_size_kb": 64, 00:19:00.852 "superblock": false, 00:19:00.852 "method": "bdev_raid_create", 00:19:00.852 "req_id": 1 00:19:00.852 } 00:19:00.852 Got JSON-RPC error response 00:19:00.852 response: 00:19:00.852 { 00:19:00.852 "code": -17, 00:19:00.852 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:00.852 } 00:19:00.852 09:45:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:19:00.852 09:45:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:00.852 09:45:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:00.853 09:45:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:00.853 09:45:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.853 09:45:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:19:01.112 09:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:19:01.112 09:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:19:01.112 09:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:01.370 [2024-07-15 09:45:29.229355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:01.370 [2024-07-15 09:45:29.229409] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:01.370 [2024-07-15 09:45:29.229418] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe1ad5034780 00:19:01.370 [2024-07-15 09:45:29.229425] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:01.370 [2024-07-15 09:45:29.230134] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:01.370 [2024-07-15 09:45:29.230161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:01.370 [2024-07-15 09:45:29.230178] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:01.370 [2024-07-15 09:45:29.230189] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:01.371 pt1 00:19:01.371 09:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:19:01.371 09:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:01.371 09:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:01.371 09:45:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:01.371 09:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:01.371 09:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:01.371 09:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:01.371 09:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:01.371 09:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:01.371 09:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:01.371 09:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:01.371 09:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.371 09:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:01.371 "name": "raid_bdev1", 00:19:01.371 "uuid": "f69322d1-428e-11ef-a0af-c98d8ee52a94", 00:19:01.371 "strip_size_kb": 64, 00:19:01.371 "state": "configuring", 00:19:01.371 "raid_level": "raid0", 00:19:01.371 "superblock": true, 00:19:01.371 "num_base_bdevs": 2, 00:19:01.371 "num_base_bdevs_discovered": 1, 00:19:01.371 "num_base_bdevs_operational": 2, 00:19:01.371 "base_bdevs_list": [ 00:19:01.371 { 00:19:01.371 "name": "pt1", 00:19:01.371 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:01.371 "is_configured": true, 00:19:01.371 "data_offset": 2048, 00:19:01.371 "data_size": 63488 00:19:01.371 }, 00:19:01.371 { 00:19:01.371 "name": null, 00:19:01.371 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:01.371 "is_configured": false, 00:19:01.371 "data_offset": 2048, 00:19:01.371 "data_size": 63488 00:19:01.371 } 00:19:01.371 ] 00:19:01.371 }' 00:19:01.371 09:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:01.371 09:45:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.939 09:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:19:01.939 09:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:19:01.939 09:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:19:01.939 09:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:01.939 [2024-07-15 09:45:29.909602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:01.939 [2024-07-15 09:45:29.909658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:01.939 [2024-07-15 09:45:29.909666] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe1ad5034f00 00:19:01.939 [2024-07-15 09:45:29.909673] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:01.939 [2024-07-15 09:45:29.909769] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:01.939 [2024-07-15 09:45:29.909785] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:01.939 [2024-07-15 09:45:29.909799] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:01.939 
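Annotation: the superblock re-assembly traced above reduces to the following minimal sketch (a reconstruction, not the test script itself; it assumes an SPDK target listening on /var/tmp/spdk-raid.sock, and uses only RPC names that appear verbatim in this trace):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Recreate the passthru bdevs over the malloc bdevs. Because the base bdevs
  # still carry the raid superblock written by the earlier bdev_raid_create -s,
  # examine finds it and re-claims each bdev for raid_bdev1 (a direct
  # bdev_raid_create over the same bdevs fails with "File exists", as above).
  $rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  $rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  # Once all base bdevs are present, the raid bdev transitions back to online.
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").state'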
[2024-07-15 09:45:29.909805] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:01.939 [2024-07-15 09:45:29.909825] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xe1ad5035180 00:19:01.939 [2024-07-15 09:45:29.909829] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:01.939 [2024-07-15 09:45:29.909845] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe1ad5097e20 00:19:01.939 [2024-07-15 09:45:29.909887] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe1ad5035180 00:19:01.939 [2024-07-15 09:45:29.909891] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xe1ad5035180 00:19:01.939 [2024-07-15 09:45:29.909906] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:01.939 pt2 00:19:01.939 09:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:19:01.939 09:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:19:01.939 09:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:19:01.939 09:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:01.939 09:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:01.939 09:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:01.939 09:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:01.939 09:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:01.939 09:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:01.939 09:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:01.939 09:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:01.939 09:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:01.939 09:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:01.939 09:45:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.198 09:45:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:02.198 "name": "raid_bdev1", 00:19:02.198 "uuid": "f69322d1-428e-11ef-a0af-c98d8ee52a94", 00:19:02.198 "strip_size_kb": 64, 00:19:02.198 "state": "online", 00:19:02.198 "raid_level": "raid0", 00:19:02.198 "superblock": true, 00:19:02.198 "num_base_bdevs": 2, 00:19:02.198 "num_base_bdevs_discovered": 2, 00:19:02.198 "num_base_bdevs_operational": 2, 00:19:02.198 "base_bdevs_list": [ 00:19:02.198 { 00:19:02.198 "name": "pt1", 00:19:02.198 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:02.198 "is_configured": true, 00:19:02.198 "data_offset": 2048, 00:19:02.198 "data_size": 63488 00:19:02.198 }, 00:19:02.198 { 00:19:02.198 "name": "pt2", 00:19:02.198 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:02.198 "is_configured": true, 00:19:02.198 "data_offset": 2048, 00:19:02.198 "data_size": 63488 00:19:02.198 } 00:19:02.198 ] 00:19:02.198 }' 00:19:02.198 09:45:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:19:02.198 09:45:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.456 09:45:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:19:02.456 09:45:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:19:02.456 09:45:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:02.456 09:45:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:02.456 09:45:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:02.456 09:45:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:02.456 09:45:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:02.456 09:45:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:02.728 [2024-07-15 09:45:30.709928] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:02.728 09:45:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:02.728 "name": "raid_bdev1", 00:19:02.728 "aliases": [ 00:19:02.728 "f69322d1-428e-11ef-a0af-c98d8ee52a94" 00:19:02.728 ], 00:19:02.728 "product_name": "Raid Volume", 00:19:02.728 "block_size": 512, 00:19:02.728 "num_blocks": 126976, 00:19:02.728 "uuid": "f69322d1-428e-11ef-a0af-c98d8ee52a94", 00:19:02.728 "assigned_rate_limits": { 00:19:02.728 "rw_ios_per_sec": 0, 00:19:02.728 "rw_mbytes_per_sec": 0, 00:19:02.728 "r_mbytes_per_sec": 0, 00:19:02.728 "w_mbytes_per_sec": 0 00:19:02.728 }, 00:19:02.728 "claimed": false, 00:19:02.728 "zoned": false, 00:19:02.728 "supported_io_types": { 00:19:02.728 "read": true, 00:19:02.728 "write": true, 00:19:02.728 "unmap": true, 00:19:02.728 "flush": true, 00:19:02.728 "reset": true, 00:19:02.728 "nvme_admin": false, 00:19:02.728 "nvme_io": false, 00:19:02.728 "nvme_io_md": false, 00:19:02.728 "write_zeroes": true, 00:19:02.728 "zcopy": false, 00:19:02.728 "get_zone_info": false, 00:19:02.728 "zone_management": false, 00:19:02.728 "zone_append": false, 00:19:02.728 "compare": false, 00:19:02.728 "compare_and_write": false, 00:19:02.728 "abort": false, 00:19:02.728 "seek_hole": false, 00:19:02.728 "seek_data": false, 00:19:02.728 "copy": false, 00:19:02.728 "nvme_iov_md": false 00:19:02.728 }, 00:19:02.728 "memory_domains": [ 00:19:02.728 { 00:19:02.728 "dma_device_id": "system", 00:19:02.728 "dma_device_type": 1 00:19:02.728 }, 00:19:02.728 { 00:19:02.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:02.728 "dma_device_type": 2 00:19:02.728 }, 00:19:02.728 { 00:19:02.728 "dma_device_id": "system", 00:19:02.728 "dma_device_type": 1 00:19:02.728 }, 00:19:02.728 { 00:19:02.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:02.728 "dma_device_type": 2 00:19:02.728 } 00:19:02.728 ], 00:19:02.728 "driver_specific": { 00:19:02.728 "raid": { 00:19:02.728 "uuid": "f69322d1-428e-11ef-a0af-c98d8ee52a94", 00:19:02.728 "strip_size_kb": 64, 00:19:02.728 "state": "online", 00:19:02.728 "raid_level": "raid0", 00:19:02.728 "superblock": true, 00:19:02.728 "num_base_bdevs": 2, 00:19:02.728 "num_base_bdevs_discovered": 2, 00:19:02.728 "num_base_bdevs_operational": 2, 00:19:02.728 "base_bdevs_list": [ 00:19:02.728 { 00:19:02.728 "name": "pt1", 00:19:02.728 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:02.728 "is_configured": 
true, 00:19:02.728 "data_offset": 2048, 00:19:02.728 "data_size": 63488 00:19:02.728 }, 00:19:02.728 { 00:19:02.728 "name": "pt2", 00:19:02.728 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:02.728 "is_configured": true, 00:19:02.728 "data_offset": 2048, 00:19:02.728 "data_size": 63488 00:19:02.728 } 00:19:02.728 ] 00:19:02.728 } 00:19:02.728 } 00:19:02.728 }' 00:19:02.728 09:45:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:02.728 09:45:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:19:02.728 pt2' 00:19:02.728 09:45:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:02.728 09:45:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:02.728 09:45:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:02.986 09:45:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:02.986 "name": "pt1", 00:19:02.986 "aliases": [ 00:19:02.986 "00000000-0000-0000-0000-000000000001" 00:19:02.986 ], 00:19:02.986 "product_name": "passthru", 00:19:02.986 "block_size": 512, 00:19:02.986 "num_blocks": 65536, 00:19:02.986 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:02.986 "assigned_rate_limits": { 00:19:02.986 "rw_ios_per_sec": 0, 00:19:02.986 "rw_mbytes_per_sec": 0, 00:19:02.986 "r_mbytes_per_sec": 0, 00:19:02.986 "w_mbytes_per_sec": 0 00:19:02.986 }, 00:19:02.986 "claimed": true, 00:19:02.986 "claim_type": "exclusive_write", 00:19:02.986 "zoned": false, 00:19:02.986 "supported_io_types": { 00:19:02.986 "read": true, 00:19:02.986 "write": true, 00:19:02.986 "unmap": true, 00:19:02.986 "flush": true, 00:19:02.986 "reset": true, 00:19:02.986 "nvme_admin": false, 00:19:02.986 "nvme_io": false, 00:19:02.986 "nvme_io_md": false, 00:19:02.986 "write_zeroes": true, 00:19:02.986 "zcopy": true, 00:19:02.986 "get_zone_info": false, 00:19:02.986 "zone_management": false, 00:19:02.986 "zone_append": false, 00:19:02.986 "compare": false, 00:19:02.986 "compare_and_write": false, 00:19:02.986 "abort": true, 00:19:02.986 "seek_hole": false, 00:19:02.986 "seek_data": false, 00:19:02.986 "copy": true, 00:19:02.986 "nvme_iov_md": false 00:19:02.986 }, 00:19:02.986 "memory_domains": [ 00:19:02.986 { 00:19:02.986 "dma_device_id": "system", 00:19:02.986 "dma_device_type": 1 00:19:02.986 }, 00:19:02.986 { 00:19:02.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:02.986 "dma_device_type": 2 00:19:02.986 } 00:19:02.986 ], 00:19:02.986 "driver_specific": { 00:19:02.986 "passthru": { 00:19:02.986 "name": "pt1", 00:19:02.986 "base_bdev_name": "malloc1" 00:19:02.986 } 00:19:02.986 } 00:19:02.986 }' 00:19:02.986 09:45:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:02.986 09:45:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:02.986 09:45:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:02.986 09:45:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:02.986 09:45:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:02.986 09:45:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:02.986 09:45:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:19:02.986 09:45:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:02.986 09:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:02.986 09:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:02.986 09:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:02.986 09:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:02.986 09:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:02.986 09:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:02.986 09:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:03.245 09:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:03.245 "name": "pt2", 00:19:03.245 "aliases": [ 00:19:03.245 "00000000-0000-0000-0000-000000000002" 00:19:03.245 ], 00:19:03.245 "product_name": "passthru", 00:19:03.245 "block_size": 512, 00:19:03.246 "num_blocks": 65536, 00:19:03.246 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:03.246 "assigned_rate_limits": { 00:19:03.246 "rw_ios_per_sec": 0, 00:19:03.246 "rw_mbytes_per_sec": 0, 00:19:03.246 "r_mbytes_per_sec": 0, 00:19:03.246 "w_mbytes_per_sec": 0 00:19:03.246 }, 00:19:03.246 "claimed": true, 00:19:03.246 "claim_type": "exclusive_write", 00:19:03.246 "zoned": false, 00:19:03.246 "supported_io_types": { 00:19:03.246 "read": true, 00:19:03.246 "write": true, 00:19:03.246 "unmap": true, 00:19:03.246 "flush": true, 00:19:03.246 "reset": true, 00:19:03.246 "nvme_admin": false, 00:19:03.246 "nvme_io": false, 00:19:03.246 "nvme_io_md": false, 00:19:03.246 "write_zeroes": true, 00:19:03.246 "zcopy": true, 00:19:03.246 "get_zone_info": false, 00:19:03.246 "zone_management": false, 00:19:03.246 "zone_append": false, 00:19:03.246 "compare": false, 00:19:03.246 "compare_and_write": false, 00:19:03.246 "abort": true, 00:19:03.246 "seek_hole": false, 00:19:03.246 "seek_data": false, 00:19:03.246 "copy": true, 00:19:03.246 "nvme_iov_md": false 00:19:03.246 }, 00:19:03.246 "memory_domains": [ 00:19:03.246 { 00:19:03.246 "dma_device_id": "system", 00:19:03.246 "dma_device_type": 1 00:19:03.246 }, 00:19:03.246 { 00:19:03.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:03.246 "dma_device_type": 2 00:19:03.246 } 00:19:03.246 ], 00:19:03.246 "driver_specific": { 00:19:03.246 "passthru": { 00:19:03.246 "name": "pt2", 00:19:03.246 "base_bdev_name": "malloc2" 00:19:03.246 } 00:19:03.246 } 00:19:03.246 }' 00:19:03.246 09:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:03.246 09:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:03.246 09:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:03.246 09:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:03.246 09:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:03.246 09:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:03.246 09:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:03.246 09:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:03.246 09:45:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:03.246 09:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:03.246 09:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:03.246 09:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:03.246 09:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:03.246 09:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:19:03.505 [2024-07-15 09:45:31.490173] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:03.505 09:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' f69322d1-428e-11ef-a0af-c98d8ee52a94 '!=' f69322d1-428e-11ef-a0af-c98d8ee52a94 ']' 00:19:03.505 09:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:19:03.505 09:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:03.505 09:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:03.505 09:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 49205 00:19:03.505 09:45:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 49205 ']' 00:19:03.505 09:45:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 49205 00:19:03.505 09:45:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:19:03.505 09:45:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:19:03.505 09:45:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 49205 00:19:03.505 09:45:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:19:03.505 09:45:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:19:03.505 09:45:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:19:03.505 09:45:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 49205' 00:19:03.505 killing process with pid 49205 00:19:03.505 09:45:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 49205 00:19:03.505 [2024-07-15 09:45:31.524187] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:03.505 [2024-07-15 09:45:31.524206] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:03.505 [2024-07-15 09:45:31.524235] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:03.505 [2024-07-15 09:45:31.524239] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe1ad5035180 name raid_bdev1, state offline 00:19:03.505 09:45:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 49205 00:19:03.505 [2024-07-15 09:45:31.542015] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:03.764 09:45:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:19:03.764 00:19:03.764 real 0m7.586s 00:19:03.764 user 0m12.386s 00:19:03.764 sys 0m2.003s 00:19:03.764 09:45:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:03.764 09:45:31 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.764 ************************************ 00:19:03.764 END TEST raid_superblock_test 00:19:03.764 ************************************ 00:19:03.764 09:45:31 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:03.764 09:45:31 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:19:03.764 09:45:31 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:03.764 09:45:31 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:03.764 09:45:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:04.023 ************************************ 00:19:04.023 START TEST raid_read_error_test 00:19:04.023 ************************************ 00:19:04.023 09:45:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 2 read 00:19:04.023 09:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:19:04.023 09:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:19:04.023 09:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:19:04.023 09:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:19:04.023 09:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:04.023 09:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:19:04.023 09:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:04.023 09:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:04.023 09:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:19:04.023 09:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:04.023 09:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:04.023 09:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:04.023 09:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:19:04.023 09:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:19:04.023 09:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:19:04.023 09:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:19:04.023 09:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:19:04.023 09:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:19:04.023 09:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:19:04.023 09:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:19:04.023 09:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:19:04.023 09:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:19:04.023 09:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.KwU2QBKRX5 00:19:04.023 09:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=49470 00:19:04.023 09:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 49470 
/var/tmp/spdk-raid.sock 00:19:04.023 09:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:04.023 09:45:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 49470 ']' 00:19:04.023 09:45:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:04.023 09:45:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:04.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:04.023 09:45:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:04.023 09:45:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:04.023 09:45:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.023 [2024-07-15 09:45:31.876441] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:19:04.023 [2024-07-15 09:45:31.876762] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:04.591 EAL: TSC is not safe to use in SMP mode 00:19:04.591 EAL: TSC is not invariant 00:19:04.591 [2024-07-15 09:45:32.608448] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.850 [2024-07-15 09:45:32.722794] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:04.850 [2024-07-15 09:45:32.725210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.850 [2024-07-15 09:45:32.725894] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:04.850 [2024-07-15 09:45:32.725905] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:04.850 09:45:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:04.850 09:45:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:19:04.850 09:45:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:04.850 09:45:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:05.107 BaseBdev1_malloc 00:19:05.107 09:45:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:19:05.373 true 00:19:05.373 09:45:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:05.373 [2024-07-15 09:45:33.405076] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:05.373 [2024-07-15 09:45:33.405155] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.373 [2024-07-15 09:45:33.405188] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9b412a34780 00:19:05.373 [2024-07-15 09:45:33.405196] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 
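Annotation: the read-error test builds each base bdev as a three-layer stack (malloc backing store, error-injection wrapper, passthru), so failures can later be injected below the raid layer. A minimal sketch of that stack, using the same $rpc shorthand as the earlier sketch and only RPCs that appear in this trace:

  $rpc bdev_malloc_create 32 512 -b BaseBdev1_malloc   # 32 MB backing store, 512-byte blocks
  $rpc bdev_error_create BaseBdev1_malloc              # error wrapper, exposed as EE_BaseBdev1_malloc
  $rpc bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1   # name used as the raid base bdev

BaseBdev2 is built the same way below, after which bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s assembles the array.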
00:19:05.373 [2024-07-15 09:45:33.405961] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.373 [2024-07-15 09:45:33.405989] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:05.373 BaseBdev1 00:19:05.373 09:45:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:05.373 09:45:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:05.632 BaseBdev2_malloc 00:19:05.632 09:45:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:19:05.891 true 00:19:05.891 09:45:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:06.226 [2024-07-15 09:45:34.013269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:06.226 [2024-07-15 09:45:34.013334] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:06.226 [2024-07-15 09:45:34.013368] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9b412a34c80 00:19:06.226 [2024-07-15 09:45:34.013375] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:06.226 [2024-07-15 09:45:34.014152] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:06.226 [2024-07-15 09:45:34.014182] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:06.226 BaseBdev2 00:19:06.226 09:45:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:19:06.226 [2024-07-15 09:45:34.221368] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:06.226 [2024-07-15 09:45:34.222175] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:06.226 [2024-07-15 09:45:34.222272] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x9b412a34f00 00:19:06.226 [2024-07-15 09:45:34.222277] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:06.226 [2024-07-15 09:45:34.222313] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9b412aa0e20 00:19:06.226 [2024-07-15 09:45:34.222394] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x9b412a34f00 00:19:06.226 [2024-07-15 09:45:34.222397] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x9b412a34f00 00:19:06.226 [2024-07-15 09:45:34.222432] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:06.226 09:45:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:19:06.226 09:45:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:06.226 09:45:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:06.226 09:45:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:06.226 09:45:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:06.226 09:45:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:06.226 09:45:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:06.226 09:45:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:06.226 09:45:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:06.226 09:45:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:06.226 09:45:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.226 09:45:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.509 09:45:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:06.509 "name": "raid_bdev1", 00:19:06.509 "uuid": "fb64399f-428e-11ef-a0af-c98d8ee52a94", 00:19:06.509 "strip_size_kb": 64, 00:19:06.509 "state": "online", 00:19:06.509 "raid_level": "raid0", 00:19:06.509 "superblock": true, 00:19:06.509 "num_base_bdevs": 2, 00:19:06.509 "num_base_bdevs_discovered": 2, 00:19:06.509 "num_base_bdevs_operational": 2, 00:19:06.509 "base_bdevs_list": [ 00:19:06.509 { 00:19:06.509 "name": "BaseBdev1", 00:19:06.509 "uuid": "4ae937da-982c-5c55-91f5-97f3d980395b", 00:19:06.509 "is_configured": true, 00:19:06.509 "data_offset": 2048, 00:19:06.509 "data_size": 63488 00:19:06.509 }, 00:19:06.509 { 00:19:06.509 "name": "BaseBdev2", 00:19:06.509 "uuid": "8f9d5cb1-5281-c351-8fe4-db5b7e138360", 00:19:06.509 "is_configured": true, 00:19:06.509 "data_offset": 2048, 00:19:06.509 "data_size": 63488 00:19:06.509 } 00:19:06.509 ] 00:19:06.509 }' 00:19:06.509 09:45:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:06.509 09:45:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.841 09:45:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:19:06.841 09:45:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:07.116 [2024-07-15 09:45:34.917658] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9b412aa0ec0 00:19:08.123 09:45:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:19:08.123 09:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:19:08.123 09:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:19:08.124 09:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:19:08.124 09:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:19:08.124 09:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:08.124 09:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:08.124 09:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:08.124 09:45:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:08.124 09:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:08.124 09:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:08.124 09:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:08.124 09:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:08.124 09:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:08.124 09:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.124 09:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.469 09:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:08.469 "name": "raid_bdev1", 00:19:08.470 "uuid": "fb64399f-428e-11ef-a0af-c98d8ee52a94", 00:19:08.470 "strip_size_kb": 64, 00:19:08.470 "state": "online", 00:19:08.470 "raid_level": "raid0", 00:19:08.470 "superblock": true, 00:19:08.470 "num_base_bdevs": 2, 00:19:08.470 "num_base_bdevs_discovered": 2, 00:19:08.470 "num_base_bdevs_operational": 2, 00:19:08.470 "base_bdevs_list": [ 00:19:08.470 { 00:19:08.470 "name": "BaseBdev1", 00:19:08.470 "uuid": "4ae937da-982c-5c55-91f5-97f3d980395b", 00:19:08.470 "is_configured": true, 00:19:08.470 "data_offset": 2048, 00:19:08.470 "data_size": 63488 00:19:08.470 }, 00:19:08.470 { 00:19:08.470 "name": "BaseBdev2", 00:19:08.470 "uuid": "8f9d5cb1-5281-c351-8fe4-db5b7e138360", 00:19:08.470 "is_configured": true, 00:19:08.470 "data_offset": 2048, 00:19:08.470 "data_size": 63488 00:19:08.470 } 00:19:08.470 ] 00:19:08.470 }' 00:19:08.470 09:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:08.470 09:45:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.729 09:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:08.988 [2024-07-15 09:45:36.820598] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:08.988 [2024-07-15 09:45:36.820634] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:08.988 [2024-07-15 09:45:36.820964] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:08.988 [2024-07-15 09:45:36.820972] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:08.988 [2024-07-15 09:45:36.820978] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:08.988 [2024-07-15 09:45:36.820982] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9b412a34f00 name raid_bdev1, state offline 00:19:08.988 0 00:19:08.988 09:45:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 49470 00:19:08.988 09:45:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 49470 ']' 00:19:08.988 09:45:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 49470 00:19:08.988 09:45:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:19:08.988 09:45:36 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:19:08.988 09:45:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 49470 00:19:08.988 09:45:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:19:08.988 09:45:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:19:08.988 09:45:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:19:08.988 09:45:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 49470' 00:19:08.988 killing process with pid 49470 00:19:08.989 09:45:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 49470 00:19:08.989 [2024-07-15 09:45:36.848279] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:08.989 09:45:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 49470 00:19:08.989 [2024-07-15 09:45:36.864929] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:09.248 09:45:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.KwU2QBKRX5 00:19:09.248 09:45:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:19:09.248 09:45:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:19:09.248 09:45:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.53 00:19:09.248 09:45:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:19:09.248 09:45:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:09.248 09:45:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:09.248 09:45:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.53 != \0\.\0\0 ]] 00:19:09.248 00:19:09.249 real 0m5.285s 00:19:09.249 user 0m7.414s 00:19:09.249 sys 0m1.369s 00:19:09.249 09:45:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:09.249 ************************************ 00:19:09.249 END TEST raid_read_error_test 00:19:09.249 ************************************ 00:19:09.249 09:45:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.249 09:45:37 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:09.249 09:45:37 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:19:09.249 09:45:37 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:09.249 09:45:37 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:09.249 09:45:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:09.249 ************************************ 00:19:09.249 START TEST raid_write_error_test 00:19:09.249 ************************************ 00:19:09.249 09:45:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 2 write 00:19:09.249 09:45:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:19:09.249 09:45:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:19:09.249 09:45:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:19:09.249 09:45:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:19:09.249 09:45:37 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:09.249 09:45:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:19:09.249 09:45:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:09.249 09:45:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:09.249 09:45:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:19:09.249 09:45:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:09.249 09:45:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:09.249 09:45:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:09.249 09:45:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:19:09.249 09:45:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:19:09.249 09:45:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:19:09.249 09:45:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:19:09.249 09:45:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:19:09.249 09:45:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:19:09.249 09:45:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:19:09.249 09:45:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:19:09.249 09:45:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:19:09.249 09:45:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:19:09.249 09:45:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.hkFFe1FRAq 00:19:09.249 09:45:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=49594 00:19:09.249 09:45:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 49594 /var/tmp/spdk-raid.sock 00:19:09.249 09:45:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:09.249 09:45:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 49594 ']' 00:19:09.249 09:45:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:09.249 09:45:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:09.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:09.249 09:45:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:09.249 09:45:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:09.249 09:45:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.249 [2024-07-15 09:45:37.219874] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
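Condensed from the trace above, the bdevperf launch for this write-error pass reduces to a few shell steps. A minimal sketch, not the verbatim script: the flags, socket, and log path are the ones logged here, waitforlisten is the helper from common/autotest_common.sh, and the output redirection is an assumption (the log is only known to be parsed for the failure rate later):

    bdevperf_log=$(mktemp -p /raidtest)    # e.g. /raidtest/tmp.hkFFe1FRAq in this run
    # -t 60 -w randrw -M 50 -o 128k -q 1: 60 s of 50/50 random read/write, 128 KiB I/O, queue depth 1
    # -z defers the workload until a perform_tests RPC arrives (sent later via bdevperf.py)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
        -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid &> "$bdevperf_log" &
    raid_pid=$!
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock    # block until the UNIX-domain socket accepts RPCs

The test then builds the raid over error-injection passthru bdevs, triggers perform_tests, and greps the raid_bdev1 line out of "$bdevperf_log" to read the failures-per-second column.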
00:19:09.249 [2024-07-15 09:45:37.220130] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:10.185 EAL: TSC is not safe to use in SMP mode 00:19:10.185 EAL: TSC is not invariant 00:19:10.185 [2024-07-15 09:45:37.940595] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.185 [2024-07-15 09:45:38.052417] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:10.185 [2024-07-15 09:45:38.054909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.185 [2024-07-15 09:45:38.055630] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:10.185 [2024-07-15 09:45:38.055643] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:10.185 09:45:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:10.185 09:45:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:19:10.185 09:45:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:10.185 09:45:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:10.443 BaseBdev1_malloc 00:19:10.443 09:45:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:19:10.443 true 00:19:10.701 09:45:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:10.701 [2024-07-15 09:45:38.718930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:10.701 [2024-07-15 09:45:38.719011] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:10.701 [2024-07-15 09:45:38.719042] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x155825a34780 00:19:10.701 [2024-07-15 09:45:38.719049] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:10.701 [2024-07-15 09:45:38.719816] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:10.701 [2024-07-15 09:45:38.719849] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:10.701 BaseBdev1 00:19:10.701 09:45:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:10.701 09:45:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:10.961 BaseBdev2_malloc 00:19:10.961 09:45:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:19:11.220 true 00:19:11.220 09:45:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:11.478 [2024-07-15 09:45:39.411148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:11.478 [2024-07-15 09:45:39.411217] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.478 [2024-07-15 09:45:39.411253] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x155825a34c80 00:19:11.478 [2024-07-15 09:45:39.411260] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.478 [2024-07-15 09:45:39.412108] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.478 [2024-07-15 09:45:39.412141] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:11.478 BaseBdev2 00:19:11.478 09:45:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:19:11.737 [2024-07-15 09:45:39.663225] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:11.737 [2024-07-15 09:45:39.663911] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:11.737 [2024-07-15 09:45:39.663982] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x155825a34f00 00:19:11.737 [2024-07-15 09:45:39.663988] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:11.737 [2024-07-15 09:45:39.664023] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x155825aa0e20 00:19:11.737 [2024-07-15 09:45:39.664097] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x155825a34f00 00:19:11.737 [2024-07-15 09:45:39.664101] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x155825a34f00 00:19:11.737 [2024-07-15 09:45:39.664123] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:11.737 09:45:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:19:11.737 09:45:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:11.737 09:45:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:11.737 09:45:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:11.737 09:45:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:11.737 09:45:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:11.737 09:45:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:11.737 09:45:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:11.737 09:45:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:11.737 09:45:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:11.737 09:45:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.737 09:45:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.994 09:45:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:11.994 "name": "raid_bdev1", 00:19:11.994 "uuid": "fea29657-428e-11ef-a0af-c98d8ee52a94", 00:19:11.994 "strip_size_kb": 64, 00:19:11.994 "state": "online", 00:19:11.994 
"raid_level": "raid0", 00:19:11.994 "superblock": true, 00:19:11.994 "num_base_bdevs": 2, 00:19:11.994 "num_base_bdevs_discovered": 2, 00:19:11.994 "num_base_bdevs_operational": 2, 00:19:11.994 "base_bdevs_list": [ 00:19:11.994 { 00:19:11.994 "name": "BaseBdev1", 00:19:11.994 "uuid": "0d44f604-84b2-5b57-aebd-3c48c7f25355", 00:19:11.994 "is_configured": true, 00:19:11.994 "data_offset": 2048, 00:19:11.994 "data_size": 63488 00:19:11.994 }, 00:19:11.994 { 00:19:11.994 "name": "BaseBdev2", 00:19:11.994 "uuid": "5b5abb5e-a07c-4558-b0cc-edd387fc8bed", 00:19:11.994 "is_configured": true, 00:19:11.994 "data_offset": 2048, 00:19:11.994 "data_size": 63488 00:19:11.994 } 00:19:11.994 ] 00:19:11.994 }' 00:19:11.994 09:45:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:11.994 09:45:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.251 09:45:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:19:12.251 09:45:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:12.251 [2024-07-15 09:45:40.315505] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x155825aa0ec0 00:19:13.625 09:45:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:19:13.625 09:45:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:19:13.625 09:45:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:19:13.625 09:45:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:19:13.625 09:45:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:19:13.625 09:45:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:13.625 09:45:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:13.625 09:45:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:13.625 09:45:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:13.625 09:45:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:13.625 09:45:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:13.625 09:45:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:13.625 09:45:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:13.625 09:45:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:13.625 09:45:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:13.625 09:45:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.883 09:45:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:13.883 "name": "raid_bdev1", 00:19:13.883 "uuid": "fea29657-428e-11ef-a0af-c98d8ee52a94", 00:19:13.883 "strip_size_kb": 64, 00:19:13.883 "state": "online", 00:19:13.883 
"raid_level": "raid0", 00:19:13.883 "superblock": true, 00:19:13.883 "num_base_bdevs": 2, 00:19:13.883 "num_base_bdevs_discovered": 2, 00:19:13.883 "num_base_bdevs_operational": 2, 00:19:13.883 "base_bdevs_list": [ 00:19:13.883 { 00:19:13.883 "name": "BaseBdev1", 00:19:13.883 "uuid": "0d44f604-84b2-5b57-aebd-3c48c7f25355", 00:19:13.883 "is_configured": true, 00:19:13.883 "data_offset": 2048, 00:19:13.883 "data_size": 63488 00:19:13.883 }, 00:19:13.883 { 00:19:13.883 "name": "BaseBdev2", 00:19:13.883 "uuid": "5b5abb5e-a07c-4558-b0cc-edd387fc8bed", 00:19:13.883 "is_configured": true, 00:19:13.883 "data_offset": 2048, 00:19:13.883 "data_size": 63488 00:19:13.883 } 00:19:13.883 ] 00:19:13.883 }' 00:19:13.883 09:45:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:13.883 09:45:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.141 09:45:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:14.141 [2024-07-15 09:45:42.230620] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:14.141 [2024-07-15 09:45:42.230661] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:14.141 [2024-07-15 09:45:42.231009] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:14.141 [2024-07-15 09:45:42.231017] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:14.141 [2024-07-15 09:45:42.231025] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:14.141 [2024-07-15 09:45:42.231029] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x155825a34f00 name raid_bdev1, state offline 00:19:14.399 0 00:19:14.399 09:45:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 49594 00:19:14.399 09:45:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 49594 ']' 00:19:14.399 09:45:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 49594 00:19:14.399 09:45:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:19:14.399 09:45:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:19:14.399 09:45:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 49594 00:19:14.399 09:45:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:19:14.399 09:45:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:19:14.399 killing process with pid 49594 00:19:14.399 09:45:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:19:14.399 09:45:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 49594' 00:19:14.399 09:45:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 49594 00:19:14.399 [2024-07-15 09:45:42.271493] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:14.399 09:45:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 49594 00:19:14.399 [2024-07-15 09:45:42.290675] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:14.658 09:45:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job 
/raidtest/tmp.hkFFe1FRAq 00:19:14.658 09:45:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:19:14.658 09:45:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:19:14.658 09:45:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:19:14.658 09:45:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:19:14.658 09:45:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:14.658 09:45:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:14.658 09:45:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:19:14.658 00:19:14.658 real 0m5.376s 00:19:14.658 user 0m7.596s 00:19:14.658 sys 0m1.337s 00:19:14.658 09:45:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:14.658 09:45:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.658 ************************************ 00:19:14.658 END TEST raid_write_error_test 00:19:14.658 ************************************ 00:19:14.658 09:45:42 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:14.658 09:45:42 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:19:14.658 09:45:42 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:19:14.658 09:45:42 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:14.658 09:45:42 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:14.658 09:45:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:14.658 ************************************ 00:19:14.658 START TEST raid_state_function_test 00:19:14.658 ************************************ 00:19:14.658 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 2 false 00:19:14.658 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:19:14.658 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:19:14.658 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:19:14.658 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:19:14.658 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:19:14.658 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:14.658 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:19:14.658 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:14.658 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:14.658 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:19:14.658 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:14.658 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:14.658 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:14.658 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:19:14.658 09:45:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:19:14.658 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:19:14.658 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:19:14.658 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:19:14.658 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:19:14.658 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:19:14.658 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:19:14.658 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:19:14.658 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:19:14.658 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=49716 00:19:14.659 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:14.659 Process raid pid: 49716 00:19:14.659 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 49716' 00:19:14.659 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 49716 /var/tmp/spdk-raid.sock 00:19:14.659 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 49716 ']' 00:19:14.659 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:14.659 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:14.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:14.659 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:14.659 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:14.659 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.659 [2024-07-15 09:45:42.654828] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:19:14.659 [2024-07-15 09:45:42.655151] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:15.597 EAL: TSC is not safe to use in SMP mode 00:19:15.597 EAL: TSC is not invariant 00:19:15.597 [2024-07-15 09:45:43.377555] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.597 [2024-07-15 09:45:43.494742] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
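The concat state-machine walk that follows boils down to the RPC sequence below. A sketch under the same socket path as logged, not the verbatim script: the $RPC shorthand is introduced here for readability, the intermediate delete/recreate cycles are compressed out, and the jq filter mirrors the one verify_raid_bdev_state uses:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # creating the raid before its base bdevs exist parks it in 'configuring'
    $RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
    $RPC bdev_malloc_create 32 512 -b BaseBdev1    # 1 of 2 base bdevs discovered: still 'configuring'
    $RPC bdev_malloc_create 32 512 -b BaseBdev2    # 2 of 2 discovered: the raid transitions to 'online'
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'
    $RPC bdev_malloc_delete BaseBdev1    # concat has no redundancy, so the raid drops to 'offline'

Each transition is checked against the bdev_raid_get_bdevs JSON dumped in the trace (state, num_base_bdevs_discovered, and the base_bdevs_list entries), which is why the same Existed_Raid document reappears after every step.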
00:19:15.597 [2024-07-15 09:45:43.497261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.597 [2024-07-15 09:45:43.497979] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:15.597 [2024-07-15 09:45:43.497990] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:15.597 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:15.597 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:19:15.597 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:19:15.855 [2024-07-15 09:45:43.749043] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:15.855 [2024-07-15 09:45:43.749104] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:15.855 [2024-07-15 09:45:43.749108] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:15.855 [2024-07-15 09:45:43.749115] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:15.855 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:19:15.855 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:15.855 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:15.855 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:15.855 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:15.855 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:15.855 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:15.855 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:15.855 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:15.855 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:15.855 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:15.855 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:16.113 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:16.113 "name": "Existed_Raid", 00:19:16.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.113 "strip_size_kb": 64, 00:19:16.113 "state": "configuring", 00:19:16.113 "raid_level": "concat", 00:19:16.114 "superblock": false, 00:19:16.114 "num_base_bdevs": 2, 00:19:16.114 "num_base_bdevs_discovered": 0, 00:19:16.114 "num_base_bdevs_operational": 2, 00:19:16.114 "base_bdevs_list": [ 00:19:16.114 { 00:19:16.114 "name": "BaseBdev1", 00:19:16.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.114 "is_configured": false, 00:19:16.114 "data_offset": 0, 00:19:16.114 "data_size": 0 00:19:16.114 }, 00:19:16.114 { 00:19:16.114 "name": "BaseBdev2", 
00:19:16.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.114 "is_configured": false, 00:19:16.114 "data_offset": 0, 00:19:16.114 "data_size": 0 00:19:16.114 } 00:19:16.114 ] 00:19:16.114 }' 00:19:16.114 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:16.114 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.371 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:16.372 [2024-07-15 09:45:44.405213] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:16.372 [2024-07-15 09:45:44.405246] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a5d8e034500 name Existed_Raid, state configuring 00:19:16.372 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:19:16.629 [2024-07-15 09:45:44.601254] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:16.629 [2024-07-15 09:45:44.601312] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:16.629 [2024-07-15 09:45:44.601316] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:16.629 [2024-07-15 09:45:44.601323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:16.629 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:16.886 [2024-07-15 09:45:44.794452] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:16.886 BaseBdev1 00:19:16.886 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:19:16.886 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:16.886 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:16.886 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:16.886 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:16.886 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:16.886 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:17.144 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:17.144 [ 00:19:17.144 { 00:19:17.144 "name": "BaseBdev1", 00:19:17.144 "aliases": [ 00:19:17.144 "01b160fd-428f-11ef-a0af-c98d8ee52a94" 00:19:17.144 ], 00:19:17.144 "product_name": "Malloc disk", 00:19:17.144 "block_size": 512, 00:19:17.144 "num_blocks": 65536, 00:19:17.144 "uuid": "01b160fd-428f-11ef-a0af-c98d8ee52a94", 00:19:17.144 "assigned_rate_limits": { 00:19:17.144 "rw_ios_per_sec": 0, 00:19:17.144 "rw_mbytes_per_sec": 0, 00:19:17.144 "r_mbytes_per_sec": 0, 00:19:17.144 "w_mbytes_per_sec": 0 00:19:17.144 }, 
00:19:17.144 "claimed": true, 00:19:17.144 "claim_type": "exclusive_write", 00:19:17.144 "zoned": false, 00:19:17.144 "supported_io_types": { 00:19:17.144 "read": true, 00:19:17.144 "write": true, 00:19:17.144 "unmap": true, 00:19:17.144 "flush": true, 00:19:17.144 "reset": true, 00:19:17.144 "nvme_admin": false, 00:19:17.144 "nvme_io": false, 00:19:17.144 "nvme_io_md": false, 00:19:17.144 "write_zeroes": true, 00:19:17.144 "zcopy": true, 00:19:17.144 "get_zone_info": false, 00:19:17.144 "zone_management": false, 00:19:17.144 "zone_append": false, 00:19:17.144 "compare": false, 00:19:17.144 "compare_and_write": false, 00:19:17.144 "abort": true, 00:19:17.144 "seek_hole": false, 00:19:17.144 "seek_data": false, 00:19:17.144 "copy": true, 00:19:17.144 "nvme_iov_md": false 00:19:17.144 }, 00:19:17.144 "memory_domains": [ 00:19:17.144 { 00:19:17.144 "dma_device_id": "system", 00:19:17.144 "dma_device_type": 1 00:19:17.144 }, 00:19:17.144 { 00:19:17.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:17.144 "dma_device_type": 2 00:19:17.144 } 00:19:17.144 ], 00:19:17.144 "driver_specific": {} 00:19:17.144 } 00:19:17.144 ] 00:19:17.144 09:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:17.144 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:19:17.144 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:17.144 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:17.144 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:17.144 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:17.144 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:17.144 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:17.144 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:17.144 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:17.144 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:17.144 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:17.144 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:17.403 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:17.403 "name": "Existed_Raid", 00:19:17.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.403 "strip_size_kb": 64, 00:19:17.403 "state": "configuring", 00:19:17.403 "raid_level": "concat", 00:19:17.403 "superblock": false, 00:19:17.403 "num_base_bdevs": 2, 00:19:17.403 "num_base_bdevs_discovered": 1, 00:19:17.403 "num_base_bdevs_operational": 2, 00:19:17.403 "base_bdevs_list": [ 00:19:17.403 { 00:19:17.403 "name": "BaseBdev1", 00:19:17.403 "uuid": "01b160fd-428f-11ef-a0af-c98d8ee52a94", 00:19:17.403 "is_configured": true, 00:19:17.403 "data_offset": 0, 00:19:17.403 "data_size": 65536 00:19:17.403 }, 00:19:17.403 { 00:19:17.403 "name": "BaseBdev2", 00:19:17.403 "uuid": "00000000-0000-0000-0000-000000000000", 
00:19:17.403 "is_configured": false, 00:19:17.403 "data_offset": 0, 00:19:17.403 "data_size": 0 00:19:17.403 } 00:19:17.403 ] 00:19:17.403 }' 00:19:17.403 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:17.403 09:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.661 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:17.920 [2024-07-15 09:45:45.885567] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:17.920 [2024-07-15 09:45:45.885612] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a5d8e034500 name Existed_Raid, state configuring 00:19:17.920 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:19:18.194 [2024-07-15 09:45:46.129667] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:18.194 [2024-07-15 09:45:46.130664] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:18.194 [2024-07-15 09:45:46.130714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:18.194 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:19:18.194 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:18.194 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:19:18.194 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:18.194 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:18.194 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:18.194 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:18.194 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:18.194 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:18.194 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:18.194 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:18.194 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:18.194 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.194 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:18.456 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:18.456 "name": "Existed_Raid", 00:19:18.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.456 "strip_size_kb": 64, 00:19:18.456 "state": "configuring", 00:19:18.456 "raid_level": "concat", 00:19:18.456 "superblock": false, 00:19:18.456 "num_base_bdevs": 2, 00:19:18.456 "num_base_bdevs_discovered": 1, 00:19:18.456 
"num_base_bdevs_operational": 2, 00:19:18.456 "base_bdevs_list": [ 00:19:18.456 { 00:19:18.456 "name": "BaseBdev1", 00:19:18.456 "uuid": "01b160fd-428f-11ef-a0af-c98d8ee52a94", 00:19:18.456 "is_configured": true, 00:19:18.456 "data_offset": 0, 00:19:18.456 "data_size": 65536 00:19:18.456 }, 00:19:18.456 { 00:19:18.456 "name": "BaseBdev2", 00:19:18.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.456 "is_configured": false, 00:19:18.456 "data_offset": 0, 00:19:18.456 "data_size": 0 00:19:18.456 } 00:19:18.456 ] 00:19:18.456 }' 00:19:18.456 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:18.456 09:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.715 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:18.973 [2024-07-15 09:45:46.925992] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:18.973 [2024-07-15 09:45:46.926029] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1a5d8e034a00 00:19:18.973 [2024-07-15 09:45:46.926033] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:19:18.974 [2024-07-15 09:45:46.926051] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a5d8e097e20 00:19:18.974 [2024-07-15 09:45:46.926150] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1a5d8e034a00 00:19:18.974 [2024-07-15 09:45:46.926154] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1a5d8e034a00 00:19:18.974 [2024-07-15 09:45:46.926184] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:18.974 BaseBdev2 00:19:18.974 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:19:18.974 09:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:18.974 09:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:18.974 09:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:18.974 09:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:18.974 09:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:18.974 09:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:19.232 09:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:19.491 [ 00:19:19.491 { 00:19:19.491 "name": "BaseBdev2", 00:19:19.491 "aliases": [ 00:19:19.491 "02f6c6b4-428f-11ef-a0af-c98d8ee52a94" 00:19:19.491 ], 00:19:19.491 "product_name": "Malloc disk", 00:19:19.491 "block_size": 512, 00:19:19.491 "num_blocks": 65536, 00:19:19.491 "uuid": "02f6c6b4-428f-11ef-a0af-c98d8ee52a94", 00:19:19.491 "assigned_rate_limits": { 00:19:19.491 "rw_ios_per_sec": 0, 00:19:19.491 "rw_mbytes_per_sec": 0, 00:19:19.491 "r_mbytes_per_sec": 0, 00:19:19.491 "w_mbytes_per_sec": 0 00:19:19.491 }, 00:19:19.491 "claimed": true, 00:19:19.491 "claim_type": "exclusive_write", 00:19:19.491 "zoned": 
false, 00:19:19.491 "supported_io_types": { 00:19:19.491 "read": true, 00:19:19.491 "write": true, 00:19:19.491 "unmap": true, 00:19:19.491 "flush": true, 00:19:19.491 "reset": true, 00:19:19.491 "nvme_admin": false, 00:19:19.491 "nvme_io": false, 00:19:19.491 "nvme_io_md": false, 00:19:19.491 "write_zeroes": true, 00:19:19.491 "zcopy": true, 00:19:19.491 "get_zone_info": false, 00:19:19.491 "zone_management": false, 00:19:19.491 "zone_append": false, 00:19:19.491 "compare": false, 00:19:19.491 "compare_and_write": false, 00:19:19.491 "abort": true, 00:19:19.491 "seek_hole": false, 00:19:19.491 "seek_data": false, 00:19:19.491 "copy": true, 00:19:19.491 "nvme_iov_md": false 00:19:19.491 }, 00:19:19.491 "memory_domains": [ 00:19:19.491 { 00:19:19.491 "dma_device_id": "system", 00:19:19.491 "dma_device_type": 1 00:19:19.491 }, 00:19:19.491 { 00:19:19.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:19.491 "dma_device_type": 2 00:19:19.491 } 00:19:19.491 ], 00:19:19.491 "driver_specific": {} 00:19:19.491 } 00:19:19.491 ] 00:19:19.491 09:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:19.491 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:19.492 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:19.492 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:19:19.492 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:19.492 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:19.492 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:19.492 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:19.492 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:19.492 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:19.492 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:19.492 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:19.492 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:19.492 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:19.492 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.750 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:19.750 "name": "Existed_Raid", 00:19:19.750 "uuid": "02f6ce55-428f-11ef-a0af-c98d8ee52a94", 00:19:19.750 "strip_size_kb": 64, 00:19:19.750 "state": "online", 00:19:19.750 "raid_level": "concat", 00:19:19.750 "superblock": false, 00:19:19.750 "num_base_bdevs": 2, 00:19:19.750 "num_base_bdevs_discovered": 2, 00:19:19.750 "num_base_bdevs_operational": 2, 00:19:19.750 "base_bdevs_list": [ 00:19:19.750 { 00:19:19.750 "name": "BaseBdev1", 00:19:19.750 "uuid": "01b160fd-428f-11ef-a0af-c98d8ee52a94", 00:19:19.750 "is_configured": true, 00:19:19.750 "data_offset": 0, 00:19:19.750 "data_size": 65536 00:19:19.750 }, 00:19:19.750 { 
00:19:19.750 "name": "BaseBdev2", 00:19:19.750 "uuid": "02f6c6b4-428f-11ef-a0af-c98d8ee52a94", 00:19:19.750 "is_configured": true, 00:19:19.750 "data_offset": 0, 00:19:19.750 "data_size": 65536 00:19:19.750 } 00:19:19.750 ] 00:19:19.750 }' 00:19:19.750 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:19.750 09:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.009 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:19:20.009 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:20.009 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:20.009 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:20.009 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:20.009 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:20.009 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:20.009 09:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:20.268 [2024-07-15 09:45:48.194168] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:20.268 09:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:20.268 "name": "Existed_Raid", 00:19:20.268 "aliases": [ 00:19:20.268 "02f6ce55-428f-11ef-a0af-c98d8ee52a94" 00:19:20.268 ], 00:19:20.268 "product_name": "Raid Volume", 00:19:20.268 "block_size": 512, 00:19:20.268 "num_blocks": 131072, 00:19:20.268 "uuid": "02f6ce55-428f-11ef-a0af-c98d8ee52a94", 00:19:20.268 "assigned_rate_limits": { 00:19:20.268 "rw_ios_per_sec": 0, 00:19:20.268 "rw_mbytes_per_sec": 0, 00:19:20.268 "r_mbytes_per_sec": 0, 00:19:20.268 "w_mbytes_per_sec": 0 00:19:20.268 }, 00:19:20.268 "claimed": false, 00:19:20.268 "zoned": false, 00:19:20.268 "supported_io_types": { 00:19:20.268 "read": true, 00:19:20.268 "write": true, 00:19:20.268 "unmap": true, 00:19:20.268 "flush": true, 00:19:20.268 "reset": true, 00:19:20.268 "nvme_admin": false, 00:19:20.268 "nvme_io": false, 00:19:20.268 "nvme_io_md": false, 00:19:20.268 "write_zeroes": true, 00:19:20.268 "zcopy": false, 00:19:20.268 "get_zone_info": false, 00:19:20.268 "zone_management": false, 00:19:20.268 "zone_append": false, 00:19:20.268 "compare": false, 00:19:20.268 "compare_and_write": false, 00:19:20.268 "abort": false, 00:19:20.268 "seek_hole": false, 00:19:20.268 "seek_data": false, 00:19:20.268 "copy": false, 00:19:20.268 "nvme_iov_md": false 00:19:20.268 }, 00:19:20.268 "memory_domains": [ 00:19:20.268 { 00:19:20.268 "dma_device_id": "system", 00:19:20.268 "dma_device_type": 1 00:19:20.268 }, 00:19:20.268 { 00:19:20.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:20.268 "dma_device_type": 2 00:19:20.268 }, 00:19:20.268 { 00:19:20.268 "dma_device_id": "system", 00:19:20.268 "dma_device_type": 1 00:19:20.268 }, 00:19:20.268 { 00:19:20.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:20.268 "dma_device_type": 2 00:19:20.268 } 00:19:20.268 ], 00:19:20.268 "driver_specific": { 00:19:20.268 "raid": { 00:19:20.268 "uuid": "02f6ce55-428f-11ef-a0af-c98d8ee52a94", 00:19:20.268 "strip_size_kb": 64, 00:19:20.268 "state": 
"online", 00:19:20.268 "raid_level": "concat", 00:19:20.268 "superblock": false, 00:19:20.268 "num_base_bdevs": 2, 00:19:20.268 "num_base_bdevs_discovered": 2, 00:19:20.268 "num_base_bdevs_operational": 2, 00:19:20.268 "base_bdevs_list": [ 00:19:20.268 { 00:19:20.268 "name": "BaseBdev1", 00:19:20.268 "uuid": "01b160fd-428f-11ef-a0af-c98d8ee52a94", 00:19:20.268 "is_configured": true, 00:19:20.268 "data_offset": 0, 00:19:20.268 "data_size": 65536 00:19:20.268 }, 00:19:20.268 { 00:19:20.268 "name": "BaseBdev2", 00:19:20.268 "uuid": "02f6c6b4-428f-11ef-a0af-c98d8ee52a94", 00:19:20.268 "is_configured": true, 00:19:20.268 "data_offset": 0, 00:19:20.268 "data_size": 65536 00:19:20.268 } 00:19:20.268 ] 00:19:20.268 } 00:19:20.268 } 00:19:20.268 }' 00:19:20.268 09:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:20.268 09:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:19:20.268 BaseBdev2' 00:19:20.268 09:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:20.268 09:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:20.268 09:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:20.527 09:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:20.527 "name": "BaseBdev1", 00:19:20.527 "aliases": [ 00:19:20.527 "01b160fd-428f-11ef-a0af-c98d8ee52a94" 00:19:20.527 ], 00:19:20.527 "product_name": "Malloc disk", 00:19:20.527 "block_size": 512, 00:19:20.527 "num_blocks": 65536, 00:19:20.527 "uuid": "01b160fd-428f-11ef-a0af-c98d8ee52a94", 00:19:20.527 "assigned_rate_limits": { 00:19:20.527 "rw_ios_per_sec": 0, 00:19:20.527 "rw_mbytes_per_sec": 0, 00:19:20.527 "r_mbytes_per_sec": 0, 00:19:20.527 "w_mbytes_per_sec": 0 00:19:20.527 }, 00:19:20.527 "claimed": true, 00:19:20.527 "claim_type": "exclusive_write", 00:19:20.527 "zoned": false, 00:19:20.527 "supported_io_types": { 00:19:20.527 "read": true, 00:19:20.527 "write": true, 00:19:20.527 "unmap": true, 00:19:20.527 "flush": true, 00:19:20.527 "reset": true, 00:19:20.527 "nvme_admin": false, 00:19:20.527 "nvme_io": false, 00:19:20.527 "nvme_io_md": false, 00:19:20.527 "write_zeroes": true, 00:19:20.527 "zcopy": true, 00:19:20.527 "get_zone_info": false, 00:19:20.527 "zone_management": false, 00:19:20.527 "zone_append": false, 00:19:20.527 "compare": false, 00:19:20.527 "compare_and_write": false, 00:19:20.527 "abort": true, 00:19:20.527 "seek_hole": false, 00:19:20.527 "seek_data": false, 00:19:20.527 "copy": true, 00:19:20.527 "nvme_iov_md": false 00:19:20.527 }, 00:19:20.527 "memory_domains": [ 00:19:20.527 { 00:19:20.527 "dma_device_id": "system", 00:19:20.527 "dma_device_type": 1 00:19:20.527 }, 00:19:20.527 { 00:19:20.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:20.527 "dma_device_type": 2 00:19:20.527 } 00:19:20.527 ], 00:19:20.527 "driver_specific": {} 00:19:20.527 }' 00:19:20.527 09:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:20.527 09:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:20.527 09:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:20.527 09:45:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:20.527 09:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:20.527 09:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:20.527 09:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:20.527 09:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:20.527 09:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:20.527 09:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:20.527 09:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:20.527 09:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:20.527 09:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:20.527 09:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:20.527 09:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:20.786 09:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:20.786 "name": "BaseBdev2", 00:19:20.786 "aliases": [ 00:19:20.786 "02f6c6b4-428f-11ef-a0af-c98d8ee52a94" 00:19:20.786 ], 00:19:20.786 "product_name": "Malloc disk", 00:19:20.786 "block_size": 512, 00:19:20.786 "num_blocks": 65536, 00:19:20.786 "uuid": "02f6c6b4-428f-11ef-a0af-c98d8ee52a94", 00:19:20.786 "assigned_rate_limits": { 00:19:20.786 "rw_ios_per_sec": 0, 00:19:20.786 "rw_mbytes_per_sec": 0, 00:19:20.786 "r_mbytes_per_sec": 0, 00:19:20.786 "w_mbytes_per_sec": 0 00:19:20.786 }, 00:19:20.786 "claimed": true, 00:19:20.786 "claim_type": "exclusive_write", 00:19:20.786 "zoned": false, 00:19:20.786 "supported_io_types": { 00:19:20.786 "read": true, 00:19:20.786 "write": true, 00:19:20.786 "unmap": true, 00:19:20.786 "flush": true, 00:19:20.786 "reset": true, 00:19:20.786 "nvme_admin": false, 00:19:20.786 "nvme_io": false, 00:19:20.786 "nvme_io_md": false, 00:19:20.786 "write_zeroes": true, 00:19:20.786 "zcopy": true, 00:19:20.786 "get_zone_info": false, 00:19:20.786 "zone_management": false, 00:19:20.786 "zone_append": false, 00:19:20.786 "compare": false, 00:19:20.786 "compare_and_write": false, 00:19:20.786 "abort": true, 00:19:20.786 "seek_hole": false, 00:19:20.786 "seek_data": false, 00:19:20.786 "copy": true, 00:19:20.786 "nvme_iov_md": false 00:19:20.786 }, 00:19:20.786 "memory_domains": [ 00:19:20.786 { 00:19:20.786 "dma_device_id": "system", 00:19:20.786 "dma_device_type": 1 00:19:20.786 }, 00:19:20.786 { 00:19:20.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:20.786 "dma_device_type": 2 00:19:20.786 } 00:19:20.786 ], 00:19:20.787 "driver_specific": {} 00:19:20.787 }' 00:19:20.787 09:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:20.787 09:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:20.787 09:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:20.787 09:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:20.787 09:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:20.787 09:45:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:20.787 09:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:20.787 09:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:20.787 09:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:20.787 09:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:20.787 09:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:20.787 09:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:20.787 09:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:21.045 [2024-07-15 09:45:48.994336] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:21.045 [2024-07-15 09:45:48.994371] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:21.045 [2024-07-15 09:45:48.994385] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:21.045 09:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:19:21.045 09:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:19:21.045 09:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:21.045 09:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:21.045 09:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:19:21.045 09:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:19:21.045 09:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:21.045 09:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:19:21.045 09:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:21.045 09:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:21.045 09:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:21.045 09:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:21.045 09:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:21.045 09:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:21.045 09:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:21.045 09:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:21.045 09:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:21.304 09:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:21.304 "name": "Existed_Raid", 00:19:21.304 "uuid": "02f6ce55-428f-11ef-a0af-c98d8ee52a94", 00:19:21.304 "strip_size_kb": 64, 00:19:21.304 "state": "offline", 00:19:21.304 "raid_level": "concat", 00:19:21.304 "superblock": false, 00:19:21.304 
"num_base_bdevs": 2, 00:19:21.304 "num_base_bdevs_discovered": 1, 00:19:21.304 "num_base_bdevs_operational": 1, 00:19:21.304 "base_bdevs_list": [ 00:19:21.304 { 00:19:21.304 "name": null, 00:19:21.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.304 "is_configured": false, 00:19:21.304 "data_offset": 0, 00:19:21.304 "data_size": 65536 00:19:21.304 }, 00:19:21.304 { 00:19:21.304 "name": "BaseBdev2", 00:19:21.304 "uuid": "02f6c6b4-428f-11ef-a0af-c98d8ee52a94", 00:19:21.304 "is_configured": true, 00:19:21.304 "data_offset": 0, 00:19:21.304 "data_size": 65536 00:19:21.304 } 00:19:21.304 ] 00:19:21.304 }' 00:19:21.304 09:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:21.304 09:45:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.563 09:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:19:21.563 09:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:21.563 09:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:21.563 09:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:21.823 09:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:21.823 09:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:21.823 09:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:22.124 [2024-07-15 09:45:49.971701] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:22.124 [2024-07-15 09:45:49.971747] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a5d8e034a00 name Existed_Raid, state offline 00:19:22.124 09:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:22.124 09:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:22.124 09:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:19:22.124 09:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.393 09:45:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:19:22.393 09:45:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:19:22.393 09:45:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:19:22.393 09:45:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 49716 00:19:22.393 09:45:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 49716 ']' 00:19:22.393 09:45:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 49716 00:19:22.393 09:45:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:19:22.393 09:45:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:19:22.393 09:45:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 49716 00:19:22.393 09:45:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # tail -1 00:19:22.393 09:45:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:19:22.393 09:45:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:19:22.393 09:45:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 49716' 00:19:22.393 killing process with pid 49716 00:19:22.393 09:45:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 49716 00:19:22.393 [2024-07-15 09:45:50.252635] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:22.393 [2024-07-15 09:45:50.252693] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:22.393 09:45:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 49716 00:19:22.651 09:45:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:19:22.651 00:19:22.651 real 0m7.879s 00:19:22.651 user 0m13.029s 00:19:22.651 sys 0m1.951s 00:19:22.651 09:45:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:22.651 09:45:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.651 ************************************ 00:19:22.651 END TEST raid_state_function_test 00:19:22.651 ************************************ 00:19:22.651 09:45:50 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:22.651 09:45:50 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:19:22.651 09:45:50 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:22.651 09:45:50 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:22.651 09:45:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:22.651 ************************************ 00:19:22.651 START TEST raid_state_function_test_sb 00:19:22.651 ************************************ 00:19:22.651 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 2 true 00:19:22.651 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:19:22.651 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:19:22.651 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:19:22.652 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:19:22.652 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:19:22.652 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:22.652 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:19:22.652 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:22.652 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:22.652 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:19:22.652 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:22.652 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:22.652 09:45:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:22.652 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:19:22.652 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:19:22.652 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:19:22.652 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:19:22.652 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:19:22.652 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:19:22.652 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:19:22.652 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:19:22.652 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:19:22.652 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:19:22.652 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=49983 00:19:22.652 Process raid pid: 49983 00:19:22.652 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 49983' 00:19:22.652 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 49983 /var/tmp/spdk-raid.sock 00:19:22.652 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:22.652 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 49983 ']' 00:19:22.652 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:22.652 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:22.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:22.652 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:22.652 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:22.652 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.652 [2024-07-15 09:45:50.588276] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:19:22.652 [2024-07-15 09:45:50.588542] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:23.217 EAL: TSC is not safe to use in SMP mode 00:19:23.217 EAL: TSC is not invariant 00:19:23.475 [2024-07-15 09:45:51.316535] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.475 [2024-07-15 09:45:51.431986] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
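The bring-up that precedes this point is the pattern every test in this log repeats: bdev_svc is launched against a private RPC socket with bdev_raid debug logging enabled, and the harness blocks in waitforlisten until that socket answers. A minimal sketch of the same sequence, assuming the repo path shown in the log; the rpc_get_methods liveness probe is illustrative only, not the harness's actual waitforlisten implementation:

    #!/usr/bin/env bash
    # Sketch: start bdev_svc and block until its RPC socket answers.
    SPDK=/home/vagrant/spdk_repo/spdk   # path as it appears in this log
    SOCK=/var/tmp/spdk-raid.sock

    "$SPDK/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 -L bdev_raid &
    raid_pid=$!

    # Illustrative probe: poll the socket until the app responds
    # (the harness's waitforlisten helper is more elaborate).
    until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$raid_pid" 2>/dev/null || { echo "bdev_svc died" >&2; exit 1; }
        sleep 0.1
    done
    echo "bdev_svc (pid $raid_pid) listening on $SOCK"
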
00:19:23.475 [2024-07-15 09:45:51.434428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.475 [2024-07-15 09:45:51.435130] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:23.475 [2024-07-15 09:45:51.435142] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:23.475 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:23.475 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:19:23.475 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:19:23.734 [2024-07-15 09:45:51.706158] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:23.734 [2024-07-15 09:45:51.706232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:23.734 [2024-07-15 09:45:51.706237] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:23.734 [2024-07-15 09:45:51.706244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:23.734 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:19:23.734 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:23.734 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:23.734 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:23.734 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:23.734 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:23.734 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:23.734 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:23.734 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:23.734 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:23.734 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:23.734 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:23.993 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:23.993 "name": "Existed_Raid", 00:19:23.993 "uuid": "05d03176-428f-11ef-a0af-c98d8ee52a94", 00:19:23.993 "strip_size_kb": 64, 00:19:23.993 "state": "configuring", 00:19:23.993 "raid_level": "concat", 00:19:23.993 "superblock": true, 00:19:23.993 "num_base_bdevs": 2, 00:19:23.993 "num_base_bdevs_discovered": 0, 00:19:23.993 "num_base_bdevs_operational": 2, 00:19:23.993 "base_bdevs_list": [ 00:19:23.993 { 00:19:23.993 "name": "BaseBdev1", 00:19:23.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.993 "is_configured": false, 00:19:23.993 "data_offset": 0, 00:19:23.993 "data_size": 0 00:19:23.993 }, 
00:19:23.993 { 00:19:23.993 "name": "BaseBdev2", 00:19:23.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.993 "is_configured": false, 00:19:23.993 "data_offset": 0, 00:19:23.993 "data_size": 0 00:19:23.993 } 00:19:23.993 ] 00:19:23.993 }' 00:19:23.993 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:23.993 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.251 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:24.509 [2024-07-15 09:45:52.410198] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:24.510 [2024-07-15 09:45:52.410230] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf51bea34500 name Existed_Raid, state configuring 00:19:24.510 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:19:24.767 [2024-07-15 09:45:52.626288] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:24.767 [2024-07-15 09:45:52.626367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:24.767 [2024-07-15 09:45:52.626371] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:24.767 [2024-07-15 09:45:52.626379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:24.767 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:24.767 [2024-07-15 09:45:52.835460] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:24.767 BaseBdev1 00:19:24.767 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:19:24.767 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:24.767 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:24.767 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:24.767 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:24.767 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:24.767 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:25.024 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:25.284 [ 00:19:25.284 { 00:19:25.284 "name": "BaseBdev1", 00:19:25.284 "aliases": [ 00:19:25.284 "067c55c7-428f-11ef-a0af-c98d8ee52a94" 00:19:25.284 ], 00:19:25.284 "product_name": "Malloc disk", 00:19:25.284 "block_size": 512, 00:19:25.284 "num_blocks": 65536, 00:19:25.284 "uuid": "067c55c7-428f-11ef-a0af-c98d8ee52a94", 00:19:25.284 "assigned_rate_limits": { 00:19:25.284 "rw_ios_per_sec": 0, 00:19:25.284 "rw_mbytes_per_sec": 
0, 00:19:25.284 "r_mbytes_per_sec": 0, 00:19:25.284 "w_mbytes_per_sec": 0 00:19:25.284 }, 00:19:25.284 "claimed": true, 00:19:25.284 "claim_type": "exclusive_write", 00:19:25.284 "zoned": false, 00:19:25.284 "supported_io_types": { 00:19:25.284 "read": true, 00:19:25.284 "write": true, 00:19:25.284 "unmap": true, 00:19:25.284 "flush": true, 00:19:25.284 "reset": true, 00:19:25.284 "nvme_admin": false, 00:19:25.284 "nvme_io": false, 00:19:25.284 "nvme_io_md": false, 00:19:25.284 "write_zeroes": true, 00:19:25.284 "zcopy": true, 00:19:25.284 "get_zone_info": false, 00:19:25.284 "zone_management": false, 00:19:25.284 "zone_append": false, 00:19:25.284 "compare": false, 00:19:25.284 "compare_and_write": false, 00:19:25.284 "abort": true, 00:19:25.284 "seek_hole": false, 00:19:25.284 "seek_data": false, 00:19:25.284 "copy": true, 00:19:25.284 "nvme_iov_md": false 00:19:25.284 }, 00:19:25.284 "memory_domains": [ 00:19:25.284 { 00:19:25.284 "dma_device_id": "system", 00:19:25.284 "dma_device_type": 1 00:19:25.284 }, 00:19:25.284 { 00:19:25.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:25.284 "dma_device_type": 2 00:19:25.284 } 00:19:25.284 ], 00:19:25.285 "driver_specific": {} 00:19:25.285 } 00:19:25.285 ] 00:19:25.285 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:25.285 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:19:25.285 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:25.285 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:25.285 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:25.285 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:25.285 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:25.285 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:25.285 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:25.285 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:25.285 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:25.285 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:25.285 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:25.559 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:25.559 "name": "Existed_Raid", 00:19:25.559 "uuid": "065c9804-428f-11ef-a0af-c98d8ee52a94", 00:19:25.559 "strip_size_kb": 64, 00:19:25.559 "state": "configuring", 00:19:25.559 "raid_level": "concat", 00:19:25.559 "superblock": true, 00:19:25.559 "num_base_bdevs": 2, 00:19:25.559 "num_base_bdevs_discovered": 1, 00:19:25.559 "num_base_bdevs_operational": 2, 00:19:25.559 "base_bdevs_list": [ 00:19:25.559 { 00:19:25.559 "name": "BaseBdev1", 00:19:25.559 "uuid": "067c55c7-428f-11ef-a0af-c98d8ee52a94", 00:19:25.559 "is_configured": true, 00:19:25.559 "data_offset": 2048, 00:19:25.559 "data_size": 
63488 00:19:25.559 }, 00:19:25.559 { 00:19:25.559 "name": "BaseBdev2", 00:19:25.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.559 "is_configured": false, 00:19:25.559 "data_offset": 0, 00:19:25.559 "data_size": 0 00:19:25.559 } 00:19:25.559 ] 00:19:25.559 }' 00:19:25.559 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:25.559 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.818 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:26.077 [2024-07-15 09:45:53.990516] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:26.077 [2024-07-15 09:45:53.990554] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf51bea34500 name Existed_Raid, state configuring 00:19:26.077 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:19:26.335 [2024-07-15 09:45:54.210552] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:26.335 [2024-07-15 09:45:54.211492] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:26.335 [2024-07-15 09:45:54.211544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:26.335 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:19:26.335 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:26.335 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:19:26.335 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:26.335 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:26.335 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:26.335 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:26.335 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:26.335 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:26.335 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:26.335 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:26.335 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:26.335 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:26.335 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:26.594 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:26.594 "name": "Existed_Raid", 00:19:26.594 "uuid": "074e5559-428f-11ef-a0af-c98d8ee52a94", 00:19:26.594 "strip_size_kb": 64, 00:19:26.594 
"state": "configuring", 00:19:26.594 "raid_level": "concat", 00:19:26.594 "superblock": true, 00:19:26.594 "num_base_bdevs": 2, 00:19:26.594 "num_base_bdevs_discovered": 1, 00:19:26.594 "num_base_bdevs_operational": 2, 00:19:26.594 "base_bdevs_list": [ 00:19:26.594 { 00:19:26.594 "name": "BaseBdev1", 00:19:26.594 "uuid": "067c55c7-428f-11ef-a0af-c98d8ee52a94", 00:19:26.594 "is_configured": true, 00:19:26.594 "data_offset": 2048, 00:19:26.594 "data_size": 63488 00:19:26.594 }, 00:19:26.594 { 00:19:26.594 "name": "BaseBdev2", 00:19:26.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.594 "is_configured": false, 00:19:26.594 "data_offset": 0, 00:19:26.594 "data_size": 0 00:19:26.594 } 00:19:26.594 ] 00:19:26.594 }' 00:19:26.594 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:26.594 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:26.852 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:27.111 [2024-07-15 09:45:54.966751] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:27.111 [2024-07-15 09:45:54.966827] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xf51bea34a00 00:19:27.111 [2024-07-15 09:45:54.966832] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:27.111 [2024-07-15 09:45:54.966851] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf51bea97e20 00:19:27.111 [2024-07-15 09:45:54.966887] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xf51bea34a00 00:19:27.111 [2024-07-15 09:45:54.966890] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xf51bea34a00 00:19:27.111 [2024-07-15 09:45:54.966908] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:27.111 BaseBdev2 00:19:27.111 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:19:27.111 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:27.111 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:27.111 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:27.111 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:27.111 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:27.111 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:27.111 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:27.369 [ 00:19:27.369 { 00:19:27.369 "name": "BaseBdev2", 00:19:27.369 "aliases": [ 00:19:27.369 "07c1b297-428f-11ef-a0af-c98d8ee52a94" 00:19:27.369 ], 00:19:27.369 "product_name": "Malloc disk", 00:19:27.369 "block_size": 512, 00:19:27.369 "num_blocks": 65536, 00:19:27.369 "uuid": "07c1b297-428f-11ef-a0af-c98d8ee52a94", 00:19:27.369 "assigned_rate_limits": { 00:19:27.369 "rw_ios_per_sec": 0, 
00:19:27.369 "rw_mbytes_per_sec": 0, 00:19:27.369 "r_mbytes_per_sec": 0, 00:19:27.369 "w_mbytes_per_sec": 0 00:19:27.369 }, 00:19:27.369 "claimed": true, 00:19:27.369 "claim_type": "exclusive_write", 00:19:27.369 "zoned": false, 00:19:27.369 "supported_io_types": { 00:19:27.369 "read": true, 00:19:27.369 "write": true, 00:19:27.369 "unmap": true, 00:19:27.369 "flush": true, 00:19:27.369 "reset": true, 00:19:27.369 "nvme_admin": false, 00:19:27.369 "nvme_io": false, 00:19:27.369 "nvme_io_md": false, 00:19:27.369 "write_zeroes": true, 00:19:27.369 "zcopy": true, 00:19:27.369 "get_zone_info": false, 00:19:27.369 "zone_management": false, 00:19:27.369 "zone_append": false, 00:19:27.369 "compare": false, 00:19:27.369 "compare_and_write": false, 00:19:27.369 "abort": true, 00:19:27.369 "seek_hole": false, 00:19:27.369 "seek_data": false, 00:19:27.369 "copy": true, 00:19:27.370 "nvme_iov_md": false 00:19:27.370 }, 00:19:27.370 "memory_domains": [ 00:19:27.370 { 00:19:27.370 "dma_device_id": "system", 00:19:27.370 "dma_device_type": 1 00:19:27.370 }, 00:19:27.370 { 00:19:27.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:27.370 "dma_device_type": 2 00:19:27.370 } 00:19:27.370 ], 00:19:27.370 "driver_specific": {} 00:19:27.370 } 00:19:27.370 ] 00:19:27.370 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:27.370 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:27.370 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:27.370 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:19:27.370 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:27.370 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:27.370 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:27.370 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:27.370 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:27.370 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:27.370 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:27.370 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:27.370 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:27.370 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:27.370 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:27.628 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:27.628 "name": "Existed_Raid", 00:19:27.628 "uuid": "074e5559-428f-11ef-a0af-c98d8ee52a94", 00:19:27.628 "strip_size_kb": 64, 00:19:27.628 "state": "online", 00:19:27.628 "raid_level": "concat", 00:19:27.628 "superblock": true, 00:19:27.628 "num_base_bdevs": 2, 00:19:27.628 "num_base_bdevs_discovered": 2, 00:19:27.628 "num_base_bdevs_operational": 2, 
00:19:27.628 "base_bdevs_list": [ 00:19:27.628 { 00:19:27.628 "name": "BaseBdev1", 00:19:27.628 "uuid": "067c55c7-428f-11ef-a0af-c98d8ee52a94", 00:19:27.628 "is_configured": true, 00:19:27.628 "data_offset": 2048, 00:19:27.628 "data_size": 63488 00:19:27.628 }, 00:19:27.628 { 00:19:27.628 "name": "BaseBdev2", 00:19:27.628 "uuid": "07c1b297-428f-11ef-a0af-c98d8ee52a94", 00:19:27.628 "is_configured": true, 00:19:27.628 "data_offset": 2048, 00:19:27.628 "data_size": 63488 00:19:27.628 } 00:19:27.628 ] 00:19:27.628 }' 00:19:27.628 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:27.628 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.887 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:19:27.887 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:27.887 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:27.887 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:27.887 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:27.887 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:19:27.887 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:27.887 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:28.146 [2024-07-15 09:45:56.170899] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:28.146 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:28.146 "name": "Existed_Raid", 00:19:28.146 "aliases": [ 00:19:28.146 "074e5559-428f-11ef-a0af-c98d8ee52a94" 00:19:28.146 ], 00:19:28.146 "product_name": "Raid Volume", 00:19:28.146 "block_size": 512, 00:19:28.146 "num_blocks": 126976, 00:19:28.146 "uuid": "074e5559-428f-11ef-a0af-c98d8ee52a94", 00:19:28.146 "assigned_rate_limits": { 00:19:28.146 "rw_ios_per_sec": 0, 00:19:28.146 "rw_mbytes_per_sec": 0, 00:19:28.146 "r_mbytes_per_sec": 0, 00:19:28.146 "w_mbytes_per_sec": 0 00:19:28.146 }, 00:19:28.146 "claimed": false, 00:19:28.146 "zoned": false, 00:19:28.146 "supported_io_types": { 00:19:28.146 "read": true, 00:19:28.146 "write": true, 00:19:28.146 "unmap": true, 00:19:28.146 "flush": true, 00:19:28.146 "reset": true, 00:19:28.146 "nvme_admin": false, 00:19:28.146 "nvme_io": false, 00:19:28.146 "nvme_io_md": false, 00:19:28.146 "write_zeroes": true, 00:19:28.146 "zcopy": false, 00:19:28.146 "get_zone_info": false, 00:19:28.146 "zone_management": false, 00:19:28.146 "zone_append": false, 00:19:28.146 "compare": false, 00:19:28.146 "compare_and_write": false, 00:19:28.146 "abort": false, 00:19:28.146 "seek_hole": false, 00:19:28.146 "seek_data": false, 00:19:28.146 "copy": false, 00:19:28.146 "nvme_iov_md": false 00:19:28.146 }, 00:19:28.146 "memory_domains": [ 00:19:28.146 { 00:19:28.146 "dma_device_id": "system", 00:19:28.146 "dma_device_type": 1 00:19:28.146 }, 00:19:28.146 { 00:19:28.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.146 "dma_device_type": 2 00:19:28.146 }, 00:19:28.146 { 00:19:28.146 "dma_device_id": "system", 00:19:28.146 "dma_device_type": 1 00:19:28.146 
}, 00:19:28.146 { 00:19:28.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.146 "dma_device_type": 2 00:19:28.146 } 00:19:28.146 ], 00:19:28.146 "driver_specific": { 00:19:28.146 "raid": { 00:19:28.146 "uuid": "074e5559-428f-11ef-a0af-c98d8ee52a94", 00:19:28.146 "strip_size_kb": 64, 00:19:28.146 "state": "online", 00:19:28.146 "raid_level": "concat", 00:19:28.146 "superblock": true, 00:19:28.146 "num_base_bdevs": 2, 00:19:28.146 "num_base_bdevs_discovered": 2, 00:19:28.146 "num_base_bdevs_operational": 2, 00:19:28.146 "base_bdevs_list": [ 00:19:28.146 { 00:19:28.146 "name": "BaseBdev1", 00:19:28.146 "uuid": "067c55c7-428f-11ef-a0af-c98d8ee52a94", 00:19:28.146 "is_configured": true, 00:19:28.146 "data_offset": 2048, 00:19:28.146 "data_size": 63488 00:19:28.146 }, 00:19:28.146 { 00:19:28.146 "name": "BaseBdev2", 00:19:28.146 "uuid": "07c1b297-428f-11ef-a0af-c98d8ee52a94", 00:19:28.146 "is_configured": true, 00:19:28.146 "data_offset": 2048, 00:19:28.146 "data_size": 63488 00:19:28.146 } 00:19:28.146 ] 00:19:28.146 } 00:19:28.146 } 00:19:28.146 }' 00:19:28.146 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:28.146 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:19:28.146 BaseBdev2' 00:19:28.146 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:28.146 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:28.146 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:28.405 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:28.406 "name": "BaseBdev1", 00:19:28.406 "aliases": [ 00:19:28.406 "067c55c7-428f-11ef-a0af-c98d8ee52a94" 00:19:28.406 ], 00:19:28.406 "product_name": "Malloc disk", 00:19:28.406 "block_size": 512, 00:19:28.406 "num_blocks": 65536, 00:19:28.406 "uuid": "067c55c7-428f-11ef-a0af-c98d8ee52a94", 00:19:28.406 "assigned_rate_limits": { 00:19:28.406 "rw_ios_per_sec": 0, 00:19:28.406 "rw_mbytes_per_sec": 0, 00:19:28.406 "r_mbytes_per_sec": 0, 00:19:28.406 "w_mbytes_per_sec": 0 00:19:28.406 }, 00:19:28.406 "claimed": true, 00:19:28.406 "claim_type": "exclusive_write", 00:19:28.406 "zoned": false, 00:19:28.406 "supported_io_types": { 00:19:28.406 "read": true, 00:19:28.406 "write": true, 00:19:28.406 "unmap": true, 00:19:28.406 "flush": true, 00:19:28.406 "reset": true, 00:19:28.406 "nvme_admin": false, 00:19:28.406 "nvme_io": false, 00:19:28.406 "nvme_io_md": false, 00:19:28.406 "write_zeroes": true, 00:19:28.406 "zcopy": true, 00:19:28.406 "get_zone_info": false, 00:19:28.406 "zone_management": false, 00:19:28.406 "zone_append": false, 00:19:28.406 "compare": false, 00:19:28.406 "compare_and_write": false, 00:19:28.406 "abort": true, 00:19:28.406 "seek_hole": false, 00:19:28.406 "seek_data": false, 00:19:28.406 "copy": true, 00:19:28.406 "nvme_iov_md": false 00:19:28.406 }, 00:19:28.406 "memory_domains": [ 00:19:28.406 { 00:19:28.406 "dma_device_id": "system", 00:19:28.406 "dma_device_type": 1 00:19:28.406 }, 00:19:28.406 { 00:19:28.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.406 "dma_device_type": 2 00:19:28.406 } 00:19:28.406 ], 00:19:28.406 "driver_specific": {} 00:19:28.406 }' 00:19:28.406 09:45:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:28.406 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:28.406 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:28.406 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:28.406 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:28.406 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:28.406 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:28.406 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:28.665 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:28.665 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:28.665 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:28.665 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:28.665 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:28.665 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:28.665 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:28.665 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:28.665 "name": "BaseBdev2", 00:19:28.665 "aliases": [ 00:19:28.665 "07c1b297-428f-11ef-a0af-c98d8ee52a94" 00:19:28.665 ], 00:19:28.665 "product_name": "Malloc disk", 00:19:28.665 "block_size": 512, 00:19:28.665 "num_blocks": 65536, 00:19:28.665 "uuid": "07c1b297-428f-11ef-a0af-c98d8ee52a94", 00:19:28.665 "assigned_rate_limits": { 00:19:28.665 "rw_ios_per_sec": 0, 00:19:28.665 "rw_mbytes_per_sec": 0, 00:19:28.665 "r_mbytes_per_sec": 0, 00:19:28.665 "w_mbytes_per_sec": 0 00:19:28.665 }, 00:19:28.665 "claimed": true, 00:19:28.665 "claim_type": "exclusive_write", 00:19:28.665 "zoned": false, 00:19:28.665 "supported_io_types": { 00:19:28.665 "read": true, 00:19:28.665 "write": true, 00:19:28.665 "unmap": true, 00:19:28.665 "flush": true, 00:19:28.665 "reset": true, 00:19:28.665 "nvme_admin": false, 00:19:28.665 "nvme_io": false, 00:19:28.665 "nvme_io_md": false, 00:19:28.665 "write_zeroes": true, 00:19:28.665 "zcopy": true, 00:19:28.665 "get_zone_info": false, 00:19:28.665 "zone_management": false, 00:19:28.665 "zone_append": false, 00:19:28.665 "compare": false, 00:19:28.665 "compare_and_write": false, 00:19:28.665 "abort": true, 00:19:28.665 "seek_hole": false, 00:19:28.665 "seek_data": false, 00:19:28.665 "copy": true, 00:19:28.665 "nvme_iov_md": false 00:19:28.665 }, 00:19:28.665 "memory_domains": [ 00:19:28.665 { 00:19:28.665 "dma_device_id": "system", 00:19:28.665 "dma_device_type": 1 00:19:28.665 }, 00:19:28.665 { 00:19:28.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.665 "dma_device_type": 2 00:19:28.665 } 00:19:28.665 ], 00:19:28.665 "driver_specific": {} 00:19:28.665 }' 00:19:28.665 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:28.665 09:45:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:28.932 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:28.932 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:28.932 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:28.932 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:28.932 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:28.932 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:28.932 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:28.932 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:28.932 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:28.932 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:28.932 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:29.203 [2024-07-15 09:45:57.027065] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:29.203 [2024-07-15 09:45:57.027116] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:29.203 [2024-07-15 09:45:57.027131] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:29.203 09:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:19:29.203 09:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:19:29.203 09:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:29.203 09:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:19:29.203 09:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:19:29.203 09:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:19:29.203 09:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:29.203 09:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:19:29.203 09:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:29.203 09:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:29.203 09:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:29.203 09:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:29.203 09:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:29.203 09:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:29.203 09:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:29.203 09:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
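The delete-and-verify exchange above condenses to two RPCs plus the jq filter the script already uses: deleting BaseBdev1 out from under a concat array, which has no redundancy, must drive Existed_Raid from online to offline. A condensed sketch of that check, reusing the exact RPC names and jq expression from the log; the bare string comparison stands in for the harness's fuller verify_raid_bdev_state:

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk-raid.sock

    # Removing a base bdev from a concat array (no redundancy) deconfigures it.
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_malloc_delete BaseBdev1

    state=$("$SPDK/scripts/rpc.py" -s "$SOCK" bdev_raid_get_bdevs all \
            | jq -r '.[] | select(.name == "Existed_Raid").state')
    [[ "$state" == offline ]] || { echo "unexpected state: $state" >&2; exit 1; }
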
00:19:29.203 09:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:29.203 09:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:29.203 "name": "Existed_Raid", 00:19:29.203 "uuid": "074e5559-428f-11ef-a0af-c98d8ee52a94", 00:19:29.203 "strip_size_kb": 64, 00:19:29.203 "state": "offline", 00:19:29.203 "raid_level": "concat", 00:19:29.203 "superblock": true, 00:19:29.203 "num_base_bdevs": 2, 00:19:29.203 "num_base_bdevs_discovered": 1, 00:19:29.203 "num_base_bdevs_operational": 1, 00:19:29.203 "base_bdevs_list": [ 00:19:29.203 { 00:19:29.203 "name": null, 00:19:29.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.203 "is_configured": false, 00:19:29.203 "data_offset": 2048, 00:19:29.203 "data_size": 63488 00:19:29.203 }, 00:19:29.203 { 00:19:29.203 "name": "BaseBdev2", 00:19:29.203 "uuid": "07c1b297-428f-11ef-a0af-c98d8ee52a94", 00:19:29.203 "is_configured": true, 00:19:29.203 "data_offset": 2048, 00:19:29.203 "data_size": 63488 00:19:29.203 } 00:19:29.203 ] 00:19:29.203 }' 00:19:29.203 09:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:29.203 09:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.788 09:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:19:29.788 09:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:29.788 09:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.788 09:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:29.788 09:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:29.788 09:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:29.788 09:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:30.045 [2024-07-15 09:45:58.012328] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:30.045 [2024-07-15 09:45:58.012376] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf51bea34a00 name Existed_Raid, state offline 00:19:30.045 09:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:30.045 09:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:30.045 09:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.045 09:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:19:30.303 09:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:19:30.303 09:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:19:30.303 09:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:19:30.303 09:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 49983 00:19:30.303 09:45:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@948 -- # '[' -z 49983 ']' 00:19:30.303 09:45:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 49983 00:19:30.303 09:45:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:19:30.303 09:45:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:19:30.303 09:45:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 49983 00:19:30.303 09:45:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:19:30.303 09:45:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:19:30.303 09:45:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:19:30.303 killing process with pid 49983 00:19:30.303 09:45:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 49983' 00:19:30.303 09:45:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 49983 00:19:30.303 [2024-07-15 09:45:58.275359] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:30.303 [2024-07-15 09:45:58.275402] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:30.303 09:45:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 49983 00:19:30.562 09:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:19:30.562 ************************************ 00:19:30.562 END TEST raid_state_function_test_sb 00:19:30.562 ************************************ 00:19:30.562 00:19:30.562 real 0m7.961s 00:19:30.562 user 0m13.243s 00:19:30.562 sys 0m1.909s 00:19:30.562 09:45:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:30.562 09:45:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.562 09:45:58 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:30.562 09:45:58 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:19:30.562 09:45:58 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:19:30.562 09:45:58 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:30.562 09:45:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:30.562 ************************************ 00:19:30.562 START TEST raid_superblock_test 00:19:30.562 ************************************ 00:19:30.562 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 2 00:19:30.562 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:19:30.562 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:19:30.562 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:19:30.562 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:19:30.562 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:19:30.562 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:19:30.562 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:19:30.562 09:45:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:19:30.562 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:19:30.562 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:19:30.562 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:19:30.562 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:19:30.562 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:19:30.562 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:19:30.562 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:19:30.562 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:19:30.562 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=50253 00:19:30.562 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 50253 /var/tmp/spdk-raid.sock 00:19:30.562 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:30.562 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 50253 ']' 00:19:30.562 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:30.562 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:30.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:30.562 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:30.562 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:30.562 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.562 [2024-07-15 09:45:58.602262] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:19:30.562 [2024-07-15 09:45:58.602656] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:31.497 EAL: TSC is not safe to use in SMP mode 00:19:31.497 EAL: TSC is not invariant 00:19:31.497 [2024-07-15 09:45:59.337592] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.497 [2024-07-15 09:45:59.455104] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
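The locals initialized above (base_bdevs_malloc, base_bdevs_pt, base_bdevs_pt_uuid) drive the stack the next RPCs build: each base device is a 32 MB malloc bdev wrapped in a passthru bdev with a fixed UUID, and the pair is then assembled into raid_bdev1 with an on-disk superblock. A condensed sketch using only the RPCs and arguments visible in this log:

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk-raid.sock

    # Per base device: 32 MB malloc bdev with 512-byte blocks (65536 blocks),
    # wrapped in a passthru bdev carrying a deterministic UUID.
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_malloc_create 32 512 -b malloc1
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_passthru_create -b malloc1 -p pt1 \
        -u 00000000-0000-0000-0000-000000000001
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_malloc_create 32 512 -b malloc2
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_passthru_create -b malloc2 -p pt2 \
        -u 00000000-0000-0000-0000-000000000002

    # Assemble: concat level, 64 KiB strips; -s enables the on-disk superblock
    # (matching "superblock": true in the raid_bdev_info dumps above).
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_raid_create -z 64 -s -r concat \
        -b 'pt1 pt2' -n raid_bdev1
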
00:19:31.497 [2024-07-15 09:45:59.457655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.497 [2024-07-15 09:45:59.458408] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:31.497 [2024-07-15 09:45:59.458421] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:31.497 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:31.497 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:19:31.497 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:19:31.497 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:31.497 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:19:31.497 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:19:31.497 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:31.497 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:31.497 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:31.497 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:31.497 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:31.755 malloc1 00:19:31.755 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:32.014 [2024-07-15 09:45:59.949546] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:32.014 [2024-07-15 09:45:59.949619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:32.014 [2024-07-15 09:45:59.949630] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2474f1634780 00:19:32.014 [2024-07-15 09:45:59.949637] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:32.014 [2024-07-15 09:45:59.950656] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:32.014 [2024-07-15 09:45:59.950691] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:32.014 pt1 00:19:32.014 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:19:32.014 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:32.014 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:19:32.014 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:19:32.014 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:32.014 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:32.014 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:32.014 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:32.014 09:45:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:32.272 malloc2 00:19:32.272 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:32.530 [2024-07-15 09:46:00.449618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:32.530 [2024-07-15 09:46:00.449689] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:32.530 [2024-07-15 09:46:00.449699] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2474f1634c80 00:19:32.530 [2024-07-15 09:46:00.449706] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:32.530 [2024-07-15 09:46:00.450403] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:32.530 [2024-07-15 09:46:00.450435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:32.530 pt2 00:19:32.530 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:19:32.530 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:32.530 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:19:32.789 [2024-07-15 09:46:00.645660] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:32.789 [2024-07-15 09:46:00.646283] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:32.789 [2024-07-15 09:46:00.646335] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2474f1634f00 00:19:32.789 [2024-07-15 09:46:00.646339] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:32.789 [2024-07-15 09:46:00.646370] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2474f1697e20 00:19:32.789 [2024-07-15 09:46:00.646441] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2474f1634f00 00:19:32.789 [2024-07-15 09:46:00.646445] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2474f1634f00 00:19:32.789 [2024-07-15 09:46:00.646470] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:32.789 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:19:32.789 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:32.789 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:32.789 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:32.789 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:32.789 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:32.789 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:32.789 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:32.789 09:46:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:32.789 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:32.789 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.789 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.789 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:32.789 "name": "raid_bdev1", 00:19:32.789 "uuid": "0b2440bc-428f-11ef-a0af-c98d8ee52a94", 00:19:32.789 "strip_size_kb": 64, 00:19:32.789 "state": "online", 00:19:32.789 "raid_level": "concat", 00:19:32.789 "superblock": true, 00:19:32.789 "num_base_bdevs": 2, 00:19:32.789 "num_base_bdevs_discovered": 2, 00:19:32.789 "num_base_bdevs_operational": 2, 00:19:32.789 "base_bdevs_list": [ 00:19:32.789 { 00:19:32.789 "name": "pt1", 00:19:32.789 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:32.789 "is_configured": true, 00:19:32.789 "data_offset": 2048, 00:19:32.789 "data_size": 63488 00:19:32.789 }, 00:19:32.789 { 00:19:32.789 "name": "pt2", 00:19:32.789 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:32.789 "is_configured": true, 00:19:32.789 "data_offset": 2048, 00:19:32.789 "data_size": 63488 00:19:32.789 } 00:19:32.789 ] 00:19:32.789 }' 00:19:32.789 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:32.789 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.091 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:19:33.091 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:19:33.091 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:33.091 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:33.091 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:33.091 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:33.091 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:33.091 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:33.350 [2024-07-15 09:46:01.345782] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:33.350 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:33.350 "name": "raid_bdev1", 00:19:33.350 "aliases": [ 00:19:33.350 "0b2440bc-428f-11ef-a0af-c98d8ee52a94" 00:19:33.350 ], 00:19:33.350 "product_name": "Raid Volume", 00:19:33.350 "block_size": 512, 00:19:33.350 "num_blocks": 126976, 00:19:33.350 "uuid": "0b2440bc-428f-11ef-a0af-c98d8ee52a94", 00:19:33.350 "assigned_rate_limits": { 00:19:33.350 "rw_ios_per_sec": 0, 00:19:33.350 "rw_mbytes_per_sec": 0, 00:19:33.350 "r_mbytes_per_sec": 0, 00:19:33.350 "w_mbytes_per_sec": 0 00:19:33.350 }, 00:19:33.350 "claimed": false, 00:19:33.350 "zoned": false, 00:19:33.350 "supported_io_types": { 00:19:33.350 "read": true, 00:19:33.350 "write": true, 00:19:33.350 "unmap": true, 00:19:33.350 "flush": true, 00:19:33.350 "reset": true, 00:19:33.350 "nvme_admin": false, 00:19:33.350 "nvme_io": 
false, 00:19:33.350 "nvme_io_md": false, 00:19:33.350 "write_zeroes": true, 00:19:33.350 "zcopy": false, 00:19:33.350 "get_zone_info": false, 00:19:33.350 "zone_management": false, 00:19:33.350 "zone_append": false, 00:19:33.350 "compare": false, 00:19:33.350 "compare_and_write": false, 00:19:33.350 "abort": false, 00:19:33.350 "seek_hole": false, 00:19:33.350 "seek_data": false, 00:19:33.350 "copy": false, 00:19:33.350 "nvme_iov_md": false 00:19:33.350 }, 00:19:33.350 "memory_domains": [ 00:19:33.350 { 00:19:33.350 "dma_device_id": "system", 00:19:33.350 "dma_device_type": 1 00:19:33.350 }, 00:19:33.350 { 00:19:33.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.350 "dma_device_type": 2 00:19:33.350 }, 00:19:33.350 { 00:19:33.350 "dma_device_id": "system", 00:19:33.350 "dma_device_type": 1 00:19:33.350 }, 00:19:33.350 { 00:19:33.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.350 "dma_device_type": 2 00:19:33.350 } 00:19:33.350 ], 00:19:33.350 "driver_specific": { 00:19:33.350 "raid": { 00:19:33.350 "uuid": "0b2440bc-428f-11ef-a0af-c98d8ee52a94", 00:19:33.350 "strip_size_kb": 64, 00:19:33.350 "state": "online", 00:19:33.350 "raid_level": "concat", 00:19:33.350 "superblock": true, 00:19:33.350 "num_base_bdevs": 2, 00:19:33.350 "num_base_bdevs_discovered": 2, 00:19:33.350 "num_base_bdevs_operational": 2, 00:19:33.350 "base_bdevs_list": [ 00:19:33.350 { 00:19:33.350 "name": "pt1", 00:19:33.350 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:33.350 "is_configured": true, 00:19:33.350 "data_offset": 2048, 00:19:33.350 "data_size": 63488 00:19:33.350 }, 00:19:33.350 { 00:19:33.350 "name": "pt2", 00:19:33.350 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:33.350 "is_configured": true, 00:19:33.350 "data_offset": 2048, 00:19:33.350 "data_size": 63488 00:19:33.350 } 00:19:33.350 ] 00:19:33.350 } 00:19:33.350 } 00:19:33.350 }' 00:19:33.350 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:33.350 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:19:33.350 pt2' 00:19:33.350 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:33.350 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:33.350 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:33.608 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:33.608 "name": "pt1", 00:19:33.608 "aliases": [ 00:19:33.608 "00000000-0000-0000-0000-000000000001" 00:19:33.608 ], 00:19:33.608 "product_name": "passthru", 00:19:33.608 "block_size": 512, 00:19:33.608 "num_blocks": 65536, 00:19:33.608 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:33.608 "assigned_rate_limits": { 00:19:33.608 "rw_ios_per_sec": 0, 00:19:33.608 "rw_mbytes_per_sec": 0, 00:19:33.608 "r_mbytes_per_sec": 0, 00:19:33.608 "w_mbytes_per_sec": 0 00:19:33.608 }, 00:19:33.608 "claimed": true, 00:19:33.608 "claim_type": "exclusive_write", 00:19:33.608 "zoned": false, 00:19:33.608 "supported_io_types": { 00:19:33.608 "read": true, 00:19:33.608 "write": true, 00:19:33.608 "unmap": true, 00:19:33.608 "flush": true, 00:19:33.608 "reset": true, 00:19:33.608 "nvme_admin": false, 00:19:33.608 "nvme_io": false, 00:19:33.608 "nvme_io_md": false, 00:19:33.608 "write_zeroes": true, 
00:19:33.608 "zcopy": true, 00:19:33.608 "get_zone_info": false, 00:19:33.608 "zone_management": false, 00:19:33.608 "zone_append": false, 00:19:33.608 "compare": false, 00:19:33.608 "compare_and_write": false, 00:19:33.608 "abort": true, 00:19:33.608 "seek_hole": false, 00:19:33.608 "seek_data": false, 00:19:33.608 "copy": true, 00:19:33.608 "nvme_iov_md": false 00:19:33.608 }, 00:19:33.608 "memory_domains": [ 00:19:33.608 { 00:19:33.608 "dma_device_id": "system", 00:19:33.608 "dma_device_type": 1 00:19:33.608 }, 00:19:33.608 { 00:19:33.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.608 "dma_device_type": 2 00:19:33.608 } 00:19:33.608 ], 00:19:33.608 "driver_specific": { 00:19:33.608 "passthru": { 00:19:33.608 "name": "pt1", 00:19:33.608 "base_bdev_name": "malloc1" 00:19:33.608 } 00:19:33.608 } 00:19:33.608 }' 00:19:33.608 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:33.608 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:33.608 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:33.608 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:33.608 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:33.608 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:33.608 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:33.608 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:33.608 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:33.608 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:33.608 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:33.608 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:33.608 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:33.608 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:33.608 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:33.895 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:33.895 "name": "pt2", 00:19:33.895 "aliases": [ 00:19:33.895 "00000000-0000-0000-0000-000000000002" 00:19:33.895 ], 00:19:33.895 "product_name": "passthru", 00:19:33.895 "block_size": 512, 00:19:33.895 "num_blocks": 65536, 00:19:33.895 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:33.895 "assigned_rate_limits": { 00:19:33.895 "rw_ios_per_sec": 0, 00:19:33.895 "rw_mbytes_per_sec": 0, 00:19:33.895 "r_mbytes_per_sec": 0, 00:19:33.895 "w_mbytes_per_sec": 0 00:19:33.895 }, 00:19:33.895 "claimed": true, 00:19:33.895 "claim_type": "exclusive_write", 00:19:33.895 "zoned": false, 00:19:33.895 "supported_io_types": { 00:19:33.895 "read": true, 00:19:33.895 "write": true, 00:19:33.895 "unmap": true, 00:19:33.895 "flush": true, 00:19:33.895 "reset": true, 00:19:33.895 "nvme_admin": false, 00:19:33.895 "nvme_io": false, 00:19:33.895 "nvme_io_md": false, 00:19:33.895 "write_zeroes": true, 00:19:33.895 "zcopy": true, 00:19:33.895 "get_zone_info": false, 00:19:33.895 "zone_management": false, 00:19:33.895 "zone_append": false, 00:19:33.895 
"compare": false, 00:19:33.895 "compare_and_write": false, 00:19:33.895 "abort": true, 00:19:33.895 "seek_hole": false, 00:19:33.895 "seek_data": false, 00:19:33.895 "copy": true, 00:19:33.895 "nvme_iov_md": false 00:19:33.895 }, 00:19:33.895 "memory_domains": [ 00:19:33.895 { 00:19:33.895 "dma_device_id": "system", 00:19:33.895 "dma_device_type": 1 00:19:33.896 }, 00:19:33.896 { 00:19:33.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.896 "dma_device_type": 2 00:19:33.896 } 00:19:33.896 ], 00:19:33.896 "driver_specific": { 00:19:33.896 "passthru": { 00:19:33.896 "name": "pt2", 00:19:33.896 "base_bdev_name": "malloc2" 00:19:33.896 } 00:19:33.896 } 00:19:33.896 }' 00:19:33.896 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:33.896 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:33.896 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:33.896 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:33.896 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:33.896 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:33.896 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:33.896 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:33.896 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:33.896 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:33.896 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:33.896 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:33.896 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:33.896 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:19:34.154 [2024-07-15 09:46:02.133896] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:34.154 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=0b2440bc-428f-11ef-a0af-c98d8ee52a94 00:19:34.154 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 0b2440bc-428f-11ef-a0af-c98d8ee52a94 ']' 00:19:34.154 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:34.412 [2024-07-15 09:46:02.349908] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:34.412 [2024-07-15 09:46:02.349933] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:34.412 [2024-07-15 09:46:02.349948] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:34.412 [2024-07-15 09:46:02.349958] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:34.412 [2024-07-15 09:46:02.349962] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2474f1634f00 name raid_bdev1, state offline 00:19:34.412 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:19:34.412 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:19:34.671 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:19:34.671 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:19:34.671 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:19:34.671 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:34.671 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:19:34.671 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:34.929 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:34.929 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:35.188 09:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:19:35.188 09:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:19:35.188 09:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:19:35.188 09:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:19:35.188 09:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:35.188 09:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:35.188 09:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:35.188 09:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:35.188 09:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:35.188 09:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:35.188 09:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:35.188 09:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:35.188 09:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:19:35.446 [2024-07-15 09:46:03.342075] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:35.446 [2024-07-15 09:46:03.342747] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:35.446 [2024-07-15 09:46:03.342772] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc1 00:19:35.447 [2024-07-15 09:46:03.342811] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:35.447 [2024-07-15 09:46:03.342819] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:35.447 [2024-07-15 09:46:03.342823] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2474f1634c80 name raid_bdev1, state configuring 00:19:35.447 request: 00:19:35.447 { 00:19:35.447 "name": "raid_bdev1", 00:19:35.447 "raid_level": "concat", 00:19:35.447 "base_bdevs": [ 00:19:35.447 "malloc1", 00:19:35.447 "malloc2" 00:19:35.447 ], 00:19:35.447 "strip_size_kb": 64, 00:19:35.447 "superblock": false, 00:19:35.447 "method": "bdev_raid_create", 00:19:35.447 "req_id": 1 00:19:35.447 } 00:19:35.447 Got JSON-RPC error response 00:19:35.447 response: 00:19:35.447 { 00:19:35.447 "code": -17, 00:19:35.447 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:35.447 } 00:19:35.447 09:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:19:35.447 09:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:35.447 09:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:35.447 09:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:35.447 09:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:35.447 09:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:19:35.706 09:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:19:35.706 09:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:19:35.706 09:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:35.706 [2024-07-15 09:46:03.754135] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:35.706 [2024-07-15 09:46:03.754188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.706 [2024-07-15 09:46:03.754197] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2474f1634780 00:19:35.706 [2024-07-15 09:46:03.754204] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.706 [2024-07-15 09:46:03.754966] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.706 [2024-07-15 09:46:03.754999] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:35.706 [2024-07-15 09:46:03.755019] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:35.706 [2024-07-15 09:46:03.755031] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:35.706 pt1 00:19:35.706 09:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:19:35.706 09:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:35.706 09:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:35.706 09:46:03 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:35.706 09:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:35.706 09:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:35.706 09:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:35.706 09:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:35.706 09:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:35.706 09:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:35.706 09:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:35.706 09:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.964 09:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:35.965 "name": "raid_bdev1", 00:19:35.965 "uuid": "0b2440bc-428f-11ef-a0af-c98d8ee52a94", 00:19:35.965 "strip_size_kb": 64, 00:19:35.965 "state": "configuring", 00:19:35.965 "raid_level": "concat", 00:19:35.965 "superblock": true, 00:19:35.965 "num_base_bdevs": 2, 00:19:35.965 "num_base_bdevs_discovered": 1, 00:19:35.965 "num_base_bdevs_operational": 2, 00:19:35.965 "base_bdevs_list": [ 00:19:35.965 { 00:19:35.965 "name": "pt1", 00:19:35.965 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:35.965 "is_configured": true, 00:19:35.965 "data_offset": 2048, 00:19:35.965 "data_size": 63488 00:19:35.965 }, 00:19:35.965 { 00:19:35.965 "name": null, 00:19:35.965 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:35.965 "is_configured": false, 00:19:35.965 "data_offset": 2048, 00:19:35.965 "data_size": 63488 00:19:35.965 } 00:19:35.965 ] 00:19:35.965 }' 00:19:35.965 09:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:35.965 09:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.223 09:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:19:36.223 09:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:19:36.223 09:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:19:36.223 09:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:36.482 [2024-07-15 09:46:04.462250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:36.482 [2024-07-15 09:46:04.462305] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:36.482 [2024-07-15 09:46:04.462315] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2474f1634f00 00:19:36.482 [2024-07-15 09:46:04.462321] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:36.482 [2024-07-15 09:46:04.462419] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:36.482 [2024-07-15 09:46:04.462427] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:36.482 [2024-07-15 09:46:04.462442] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 
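
The superblock written by the -s flag drives the reassembly traced here: after pt1 alone brings raid_bdev1 back in the "configuring" state with one of two base bdevs discovered, re-creating pt2 is enough for the examine path to promote it to "online" without an explicit bdev_raid_create. A minimal sketch of that step using the same RPCs as the trace (the jq state filter is illustrative):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # the second superblock-bearing base bdev reappears...
    $rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    # ...and examine() assembles raid_bdev1 automatically; state flips to "online"
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").state'
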
00:19:36.482 [2024-07-15 09:46:04.462449] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:36.482 [2024-07-15 09:46:04.462468] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2474f1635180 00:19:36.482 [2024-07-15 09:46:04.462472] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:36.482 [2024-07-15 09:46:04.462488] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2474f1697e20 00:19:36.482 [2024-07-15 09:46:04.462528] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2474f1635180 00:19:36.482 [2024-07-15 09:46:04.462532] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2474f1635180 00:19:36.482 [2024-07-15 09:46:04.462548] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:36.482 pt2 00:19:36.482 09:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:19:36.482 09:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:19:36.482 09:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:19:36.482 09:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:36.482 09:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:36.482 09:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:36.482 09:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:36.482 09:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:36.482 09:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:36.482 09:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:36.482 09:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:36.482 09:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:36.482 09:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.482 09:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.741 09:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:36.741 "name": "raid_bdev1", 00:19:36.741 "uuid": "0b2440bc-428f-11ef-a0af-c98d8ee52a94", 00:19:36.741 "strip_size_kb": 64, 00:19:36.741 "state": "online", 00:19:36.741 "raid_level": "concat", 00:19:36.741 "superblock": true, 00:19:36.741 "num_base_bdevs": 2, 00:19:36.741 "num_base_bdevs_discovered": 2, 00:19:36.741 "num_base_bdevs_operational": 2, 00:19:36.741 "base_bdevs_list": [ 00:19:36.741 { 00:19:36.741 "name": "pt1", 00:19:36.741 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:36.741 "is_configured": true, 00:19:36.741 "data_offset": 2048, 00:19:36.741 "data_size": 63488 00:19:36.741 }, 00:19:36.741 { 00:19:36.741 "name": "pt2", 00:19:36.741 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:36.741 "is_configured": true, 00:19:36.741 "data_offset": 2048, 00:19:36.741 "data_size": 63488 00:19:36.741 } 00:19:36.741 ] 00:19:36.741 }' 00:19:36.741 09:46:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:36.741 09:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.000 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:19:37.000 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:19:37.000 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:37.000 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:37.000 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:37.000 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:37.000 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:37.000 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:37.259 [2024-07-15 09:46:05.242395] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:37.259 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:37.259 "name": "raid_bdev1", 00:19:37.259 "aliases": [ 00:19:37.259 "0b2440bc-428f-11ef-a0af-c98d8ee52a94" 00:19:37.259 ], 00:19:37.259 "product_name": "Raid Volume", 00:19:37.259 "block_size": 512, 00:19:37.259 "num_blocks": 126976, 00:19:37.260 "uuid": "0b2440bc-428f-11ef-a0af-c98d8ee52a94", 00:19:37.260 "assigned_rate_limits": { 00:19:37.260 "rw_ios_per_sec": 0, 00:19:37.260 "rw_mbytes_per_sec": 0, 00:19:37.260 "r_mbytes_per_sec": 0, 00:19:37.260 "w_mbytes_per_sec": 0 00:19:37.260 }, 00:19:37.260 "claimed": false, 00:19:37.260 "zoned": false, 00:19:37.260 "supported_io_types": { 00:19:37.260 "read": true, 00:19:37.260 "write": true, 00:19:37.260 "unmap": true, 00:19:37.260 "flush": true, 00:19:37.260 "reset": true, 00:19:37.260 "nvme_admin": false, 00:19:37.260 "nvme_io": false, 00:19:37.260 "nvme_io_md": false, 00:19:37.260 "write_zeroes": true, 00:19:37.260 "zcopy": false, 00:19:37.260 "get_zone_info": false, 00:19:37.260 "zone_management": false, 00:19:37.260 "zone_append": false, 00:19:37.260 "compare": false, 00:19:37.260 "compare_and_write": false, 00:19:37.260 "abort": false, 00:19:37.260 "seek_hole": false, 00:19:37.260 "seek_data": false, 00:19:37.260 "copy": false, 00:19:37.260 "nvme_iov_md": false 00:19:37.260 }, 00:19:37.260 "memory_domains": [ 00:19:37.260 { 00:19:37.260 "dma_device_id": "system", 00:19:37.260 "dma_device_type": 1 00:19:37.260 }, 00:19:37.260 { 00:19:37.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:37.260 "dma_device_type": 2 00:19:37.260 }, 00:19:37.260 { 00:19:37.260 "dma_device_id": "system", 00:19:37.260 "dma_device_type": 1 00:19:37.260 }, 00:19:37.260 { 00:19:37.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:37.260 "dma_device_type": 2 00:19:37.260 } 00:19:37.260 ], 00:19:37.260 "driver_specific": { 00:19:37.260 "raid": { 00:19:37.260 "uuid": "0b2440bc-428f-11ef-a0af-c98d8ee52a94", 00:19:37.260 "strip_size_kb": 64, 00:19:37.260 "state": "online", 00:19:37.260 "raid_level": "concat", 00:19:37.260 "superblock": true, 00:19:37.260 "num_base_bdevs": 2, 00:19:37.260 "num_base_bdevs_discovered": 2, 00:19:37.260 "num_base_bdevs_operational": 2, 00:19:37.260 "base_bdevs_list": [ 00:19:37.260 { 00:19:37.260 "name": "pt1", 00:19:37.260 "uuid": "00000000-0000-0000-0000-000000000001", 
00:19:37.260 "is_configured": true, 00:19:37.260 "data_offset": 2048, 00:19:37.260 "data_size": 63488 00:19:37.260 }, 00:19:37.260 { 00:19:37.260 "name": "pt2", 00:19:37.260 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:37.260 "is_configured": true, 00:19:37.260 "data_offset": 2048, 00:19:37.260 "data_size": 63488 00:19:37.260 } 00:19:37.260 ] 00:19:37.260 } 00:19:37.260 } 00:19:37.260 }' 00:19:37.260 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:37.260 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:19:37.260 pt2' 00:19:37.260 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:37.260 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:37.260 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:37.529 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:37.529 "name": "pt1", 00:19:37.529 "aliases": [ 00:19:37.529 "00000000-0000-0000-0000-000000000001" 00:19:37.529 ], 00:19:37.529 "product_name": "passthru", 00:19:37.529 "block_size": 512, 00:19:37.529 "num_blocks": 65536, 00:19:37.529 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:37.529 "assigned_rate_limits": { 00:19:37.529 "rw_ios_per_sec": 0, 00:19:37.529 "rw_mbytes_per_sec": 0, 00:19:37.529 "r_mbytes_per_sec": 0, 00:19:37.529 "w_mbytes_per_sec": 0 00:19:37.529 }, 00:19:37.529 "claimed": true, 00:19:37.529 "claim_type": "exclusive_write", 00:19:37.529 "zoned": false, 00:19:37.529 "supported_io_types": { 00:19:37.529 "read": true, 00:19:37.529 "write": true, 00:19:37.529 "unmap": true, 00:19:37.529 "flush": true, 00:19:37.529 "reset": true, 00:19:37.529 "nvme_admin": false, 00:19:37.529 "nvme_io": false, 00:19:37.529 "nvme_io_md": false, 00:19:37.529 "write_zeroes": true, 00:19:37.529 "zcopy": true, 00:19:37.529 "get_zone_info": false, 00:19:37.529 "zone_management": false, 00:19:37.529 "zone_append": false, 00:19:37.529 "compare": false, 00:19:37.529 "compare_and_write": false, 00:19:37.529 "abort": true, 00:19:37.529 "seek_hole": false, 00:19:37.529 "seek_data": false, 00:19:37.529 "copy": true, 00:19:37.529 "nvme_iov_md": false 00:19:37.529 }, 00:19:37.529 "memory_domains": [ 00:19:37.529 { 00:19:37.529 "dma_device_id": "system", 00:19:37.529 "dma_device_type": 1 00:19:37.529 }, 00:19:37.529 { 00:19:37.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:37.529 "dma_device_type": 2 00:19:37.529 } 00:19:37.529 ], 00:19:37.529 "driver_specific": { 00:19:37.529 "passthru": { 00:19:37.529 "name": "pt1", 00:19:37.529 "base_bdev_name": "malloc1" 00:19:37.529 } 00:19:37.529 } 00:19:37.529 }' 00:19:37.529 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:37.529 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:37.529 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:37.529 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:37.529 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:37.529 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:37.529 09:46:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:37.529 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:37.529 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:37.529 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:37.530 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:37.530 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:37.530 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:37.530 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:37.530 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:37.788 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:37.788 "name": "pt2", 00:19:37.788 "aliases": [ 00:19:37.788 "00000000-0000-0000-0000-000000000002" 00:19:37.788 ], 00:19:37.788 "product_name": "passthru", 00:19:37.788 "block_size": 512, 00:19:37.788 "num_blocks": 65536, 00:19:37.788 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:37.788 "assigned_rate_limits": { 00:19:37.788 "rw_ios_per_sec": 0, 00:19:37.788 "rw_mbytes_per_sec": 0, 00:19:37.788 "r_mbytes_per_sec": 0, 00:19:37.788 "w_mbytes_per_sec": 0 00:19:37.788 }, 00:19:37.788 "claimed": true, 00:19:37.788 "claim_type": "exclusive_write", 00:19:37.788 "zoned": false, 00:19:37.788 "supported_io_types": { 00:19:37.788 "read": true, 00:19:37.788 "write": true, 00:19:37.788 "unmap": true, 00:19:37.788 "flush": true, 00:19:37.788 "reset": true, 00:19:37.788 "nvme_admin": false, 00:19:37.788 "nvme_io": false, 00:19:37.788 "nvme_io_md": false, 00:19:37.788 "write_zeroes": true, 00:19:37.788 "zcopy": true, 00:19:37.788 "get_zone_info": false, 00:19:37.788 "zone_management": false, 00:19:37.788 "zone_append": false, 00:19:37.788 "compare": false, 00:19:37.788 "compare_and_write": false, 00:19:37.788 "abort": true, 00:19:37.788 "seek_hole": false, 00:19:37.788 "seek_data": false, 00:19:37.788 "copy": true, 00:19:37.788 "nvme_iov_md": false 00:19:37.788 }, 00:19:37.788 "memory_domains": [ 00:19:37.788 { 00:19:37.788 "dma_device_id": "system", 00:19:37.788 "dma_device_type": 1 00:19:37.788 }, 00:19:37.788 { 00:19:37.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:37.788 "dma_device_type": 2 00:19:37.788 } 00:19:37.788 ], 00:19:37.788 "driver_specific": { 00:19:37.788 "passthru": { 00:19:37.788 "name": "pt2", 00:19:37.788 "base_bdev_name": "malloc2" 00:19:37.788 } 00:19:37.788 } 00:19:37.788 }' 00:19:37.788 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:37.788 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:37.788 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:37.788 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:37.788 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:37.788 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:37.788 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:37.788 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
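
The paired jq calls above are verify_raid_bdev_properties at work: each field is extracted and compared with a bash [[ ]] test against the expected value. A minimal standalone sketch of one such check against pt2, under the same socket path as the trace (the info variable name is illustrative):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($rpc bdev_get_bdevs -b pt2 | jq '.[]')
    # passthru bdevs over 512-byte malloc backends carry no metadata or DIF
    [[ $(jq .block_size <<< "$info") == 512 ]]  || echo "unexpected block_size"
    [[ $(jq .md_size <<< "$info") == null ]]    || echo "unexpected md_size"
    [[ $(jq .dif_type <<< "$info") == null ]]   || echo "unexpected dif_type"
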
00:19:37.788 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:37.788 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:37.788 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:37.788 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:37.788 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:37.788 09:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:19:38.047 [2024-07-15 09:46:06.038504] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:38.047 09:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 0b2440bc-428f-11ef-a0af-c98d8ee52a94 '!=' 0b2440bc-428f-11ef-a0af-c98d8ee52a94 ']' 00:19:38.047 09:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:19:38.047 09:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:38.047 09:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:38.047 09:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 50253 00:19:38.047 09:46:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 50253 ']' 00:19:38.047 09:46:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 50253 00:19:38.047 09:46:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:19:38.047 09:46:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:19:38.047 09:46:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:19:38.047 09:46:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 50253 00:19:38.047 09:46:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:19:38.047 09:46:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:19:38.047 killing process with pid 50253 00:19:38.047 09:46:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 50253' 00:19:38.047 09:46:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 50253 00:19:38.047 [2024-07-15 09:46:06.072666] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:38.047 [2024-07-15 09:46:06.072684] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:38.047 [2024-07-15 09:46:06.072706] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:38.047 [2024-07-15 09:46:06.072710] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2474f1635180 name raid_bdev1, state offline 00:19:38.047 09:46:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 50253 00:19:38.047 [2024-07-15 09:46:06.090275] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:38.306 09:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:19:38.306 00:19:38.306 real 0m7.757s 00:19:38.306 user 0m12.867s 00:19:38.306 sys 0m1.863s 00:19:38.306 09:46:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:38.306 09:46:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.306 ************************************ 00:19:38.306 END TEST raid_superblock_test 00:19:38.306 ************************************ 00:19:38.306 09:46:06 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:38.306 09:46:06 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:19:38.306 09:46:06 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:38.306 09:46:06 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:38.306 09:46:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:38.306 ************************************ 00:19:38.306 START TEST raid_read_error_test 00:19:38.306 ************************************ 00:19:38.306 09:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 2 read 00:19:38.306 09:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:19:38.306 09:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:19:38.306 09:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:19:38.306 09:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:19:38.306 09:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:38.306 09:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:19:38.306 09:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:38.563 09:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:38.563 09:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:19:38.563 09:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:38.563 09:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:38.563 09:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:38.563 09:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:19:38.563 09:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:19:38.564 09:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:19:38.564 09:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:19:38.564 09:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:19:38.564 09:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:19:38.564 09:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:19:38.564 09:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:19:38.564 09:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:19:38.564 09:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:19:38.564 09:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.WPbC7mhziA 00:19:38.564 09:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=50514 00:19:38.564 09:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 50514 
/var/tmp/spdk-raid.sock 00:19:38.564 09:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:38.564 09:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 50514 ']' 00:19:38.564 09:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:38.564 09:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:38.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:38.564 09:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:38.564 09:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:38.564 09:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.564 [2024-07-15 09:46:06.418327] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:19:38.564 [2024-07-15 09:46:06.418562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:39.129 EAL: TSC is not safe to use in SMP mode 00:19:39.129 EAL: TSC is not invariant 00:19:39.129 [2024-07-15 09:46:07.135117] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.386 [2024-07-15 09:46:07.250338] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:39.386 [2024-07-15 09:46:07.252854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.386 [2024-07-15 09:46:07.253551] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:39.386 [2024-07-15 09:46:07.253563] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:39.386 09:46:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:39.386 09:46:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:19:39.386 09:46:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:39.386 09:46:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:39.644 BaseBdev1_malloc 00:19:39.644 09:46:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:19:39.903 true 00:19:39.903 09:46:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:40.161 [2024-07-15 09:46:08.060643] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:40.161 [2024-07-15 09:46:08.060719] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:40.161 [2024-07-15 09:46:08.060753] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ac96234780 00:19:40.161 [2024-07-15 09:46:08.060760] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 
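
Each base bdev in the read-error test is a three-layer stack: a malloc bdev wrapped by an error-injection bdev (bdev_error_create exposes it as EE_<base>), wrapped again by a passthru that the raid consumes. The same RPC sequence as the trace, condensed into one sketch for the first base bdev:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_malloc_create 32 512 -b BaseBdev1_malloc
    $rpc bdev_error_create BaseBdev1_malloc            # exposes EE_BaseBdev1_malloc
    $rpc bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
    # later in the test, read failures are injected through the error layer:
    $rpc bdev_error_inject_error EE_BaseBdev1_malloc read failure

Because the injection targets the EE_ wrapper rather than the raid bdev itself, the raid's error handling is exercised exactly where a real base-device read failure would surface.
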
00:19:40.161 [2024-07-15 09:46:08.061558] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:40.161 [2024-07-15 09:46:08.061585] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:40.161 BaseBdev1 00:19:40.161 09:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:40.161 09:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:40.474 BaseBdev2_malloc 00:19:40.474 09:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:19:40.474 true 00:19:40.474 09:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:40.731 [2024-07-15 09:46:08.696764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:40.731 [2024-07-15 09:46:08.696838] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:40.731 [2024-07-15 09:46:08.696875] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ac96234c80 00:19:40.731 [2024-07-15 09:46:08.696882] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:40.731 [2024-07-15 09:46:08.697716] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:40.731 [2024-07-15 09:46:08.697748] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:40.731 BaseBdev2 00:19:40.731 09:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:19:40.990 [2024-07-15 09:46:08.896809] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:40.990 [2024-07-15 09:46:08.897533] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:40.990 [2024-07-15 09:46:08.897603] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1ac96234f00 00:19:40.990 [2024-07-15 09:46:08.897608] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:40.990 [2024-07-15 09:46:08.897644] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ac962a0e20 00:19:40.990 [2024-07-15 09:46:08.897722] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1ac96234f00 00:19:40.990 [2024-07-15 09:46:08.897725] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1ac96234f00 00:19:40.990 [2024-07-15 09:46:08.897749] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:40.990 09:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:19:40.990 09:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:40.990 09:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:40.990 09:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:40.990 09:46:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:40.990 09:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:40.990 09:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:40.990 09:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:40.990 09:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:40.990 09:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:40.990 09:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.990 09:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.337 09:46:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:41.337 "name": "raid_bdev1", 00:19:41.337 "uuid": "100f470f-428f-11ef-a0af-c98d8ee52a94", 00:19:41.337 "strip_size_kb": 64, 00:19:41.337 "state": "online", 00:19:41.337 "raid_level": "concat", 00:19:41.337 "superblock": true, 00:19:41.337 "num_base_bdevs": 2, 00:19:41.337 "num_base_bdevs_discovered": 2, 00:19:41.337 "num_base_bdevs_operational": 2, 00:19:41.337 "base_bdevs_list": [ 00:19:41.337 { 00:19:41.337 "name": "BaseBdev1", 00:19:41.337 "uuid": "be606073-e485-f756-a0c3-70e3fb655038", 00:19:41.337 "is_configured": true, 00:19:41.337 "data_offset": 2048, 00:19:41.337 "data_size": 63488 00:19:41.337 }, 00:19:41.337 { 00:19:41.337 "name": "BaseBdev2", 00:19:41.337 "uuid": "346ced26-7a05-c954-b319-58f697a72e24", 00:19:41.337 "is_configured": true, 00:19:41.337 "data_offset": 2048, 00:19:41.337 "data_size": 63488 00:19:41.337 } 00:19:41.337 ] 00:19:41.337 }' 00:19:41.337 09:46:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:41.337 09:46:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.608 09:46:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:19:41.608 09:46:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:41.608 [2024-07-15 09:46:09.508990] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ac962a0ec0 00:19:42.542 09:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:19:42.800 09:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:19:42.800 09:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:19:42.800 09:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:19:42.800 09:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:19:42.800 09:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:42.800 09:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:42.800 09:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:42.800 09:46:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:42.800 09:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:42.800 09:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:42.800 09:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:42.800 09:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:42.800 09:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:42.800 09:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.800 09:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.059 09:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:43.059 "name": "raid_bdev1", 00:19:43.059 "uuid": "100f470f-428f-11ef-a0af-c98d8ee52a94", 00:19:43.059 "strip_size_kb": 64, 00:19:43.059 "state": "online", 00:19:43.059 "raid_level": "concat", 00:19:43.059 "superblock": true, 00:19:43.059 "num_base_bdevs": 2, 00:19:43.059 "num_base_bdevs_discovered": 2, 00:19:43.059 "num_base_bdevs_operational": 2, 00:19:43.059 "base_bdevs_list": [ 00:19:43.059 { 00:19:43.059 "name": "BaseBdev1", 00:19:43.059 "uuid": "be606073-e485-f756-a0c3-70e3fb655038", 00:19:43.059 "is_configured": true, 00:19:43.059 "data_offset": 2048, 00:19:43.059 "data_size": 63488 00:19:43.059 }, 00:19:43.059 { 00:19:43.059 "name": "BaseBdev2", 00:19:43.059 "uuid": "346ced26-7a05-c954-b319-58f697a72e24", 00:19:43.059 "is_configured": true, 00:19:43.059 "data_offset": 2048, 00:19:43.059 "data_size": 63488 00:19:43.059 } 00:19:43.059 ] 00:19:43.059 }' 00:19:43.059 09:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:43.059 09:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.318 09:46:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:43.577 [2024-07-15 09:46:11.504632] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:43.577 [2024-07-15 09:46:11.504671] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:43.577 [2024-07-15 09:46:11.505033] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:43.577 [2024-07-15 09:46:11.505049] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:43.577 [2024-07-15 09:46:11.505055] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:43.577 [2024-07-15 09:46:11.505060] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ac96234f00 name raid_bdev1, state offline 00:19:43.577 0 00:19:43.577 09:46:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 50514 00:19:43.577 09:46:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 50514 ']' 00:19:43.577 09:46:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 50514 00:19:43.577 09:46:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:19:43.577 09:46:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:19:43.577 09:46:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 50514 00:19:43.577 09:46:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:19:43.577 09:46:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:19:43.577 09:46:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:19:43.577 killing process with pid 50514 00:19:43.577 09:46:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 50514' 00:19:43.577 09:46:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 50514 00:19:43.577 [2024-07-15 09:46:11.550081] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:43.577 09:46:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 50514 00:19:43.577 [2024-07-15 09:46:11.567778] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:43.836 09:46:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.WPbC7mhziA 00:19:43.836 09:46:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:19:43.836 09:46:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:19:43.836 09:46:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.50 00:19:43.836 09:46:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:19:43.836 09:46:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:43.836 09:46:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:43.836 09:46:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.50 != \0\.\0\0 ]] 00:19:43.836 00:19:43.836 real 0m5.443s 00:19:43.836 user 0m7.779s 00:19:43.836 sys 0m1.323s 00:19:43.836 09:46:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:43.836 ************************************ 00:19:43.836 END TEST raid_read_error_test 00:19:43.836 ************************************ 00:19:43.836 09:46:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.836 09:46:11 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:43.836 09:46:11 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:19:43.836 09:46:11 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:43.836 09:46:11 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:43.836 09:46:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:43.836 ************************************ 00:19:43.836 START TEST raid_write_error_test 00:19:43.836 ************************************ 00:19:43.836 09:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 2 write 00:19:43.836 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:19:43.836 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:19:43.836 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:19:43.836 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:19:43.836 09:46:11 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:43.836 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:19:43.836 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:43.836 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:43.836 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:19:43.836 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:43.836 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:43.836 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:43.836 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:19:43.836 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:19:43.836 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:19:43.836 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:19:43.836 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:19:43.836 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:19:43.836 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:19:43.836 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:19:43.836 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:19:43.836 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:19:43.836 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.AqynFfYk2Y 00:19:43.836 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=50638 00:19:43.836 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 50638 /var/tmp/spdk-raid.sock 00:19:43.836 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:43.836 09:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 50638 ']' 00:19:43.836 09:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:43.836 09:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:43.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:43.836 09:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:43.836 09:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:43.836 09:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.836 [2024-07-15 09:46:11.919599] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
00:19:43.836 [2024-07-15 09:46:11.919917] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:44.769 EAL: TSC is not safe to use in SMP mode 00:19:44.769 EAL: TSC is not invariant 00:19:44.769 [2024-07-15 09:46:12.667457] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.769 [2024-07-15 09:46:12.777516] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:44.769 [2024-07-15 09:46:12.780124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.769 [2024-07-15 09:46:12.780876] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:44.769 [2024-07-15 09:46:12.780889] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:45.025 09:46:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:45.025 09:46:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:19:45.025 09:46:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:45.025 09:46:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:45.299 BaseBdev1_malloc 00:19:45.299 09:46:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:19:45.299 true 00:19:45.299 09:46:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:45.558 [2024-07-15 09:46:13.524022] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:45.558 [2024-07-15 09:46:13.524106] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:45.558 [2024-07-15 09:46:13.524141] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbb43e634780 00:19:45.558 [2024-07-15 09:46:13.524149] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:45.558 [2024-07-15 09:46:13.524973] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:45.558 [2024-07-15 09:46:13.525002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:45.558 BaseBdev1 00:19:45.558 09:46:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:45.558 09:46:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:45.816 BaseBdev2_malloc 00:19:45.816 09:46:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:19:46.075 true 00:19:46.076 09:46:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:46.076 [2024-07-15 09:46:14.132077] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:46.076 [2024-07-15 09:46:14.132176] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:46.076 [2024-07-15 09:46:14.132230] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbb43e634c80 00:19:46.076 [2024-07-15 09:46:14.132249] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:46.076 [2024-07-15 09:46:14.133286] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:46.076 [2024-07-15 09:46:14.133327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:46.076 BaseBdev2 00:19:46.076 09:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:19:46.335 [2024-07-15 09:46:14.344078] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:46.335 [2024-07-15 09:46:14.344791] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:46.335 [2024-07-15 09:46:14.344855] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xbb43e634f00 00:19:46.335 [2024-07-15 09:46:14.344860] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:46.335 [2024-07-15 09:46:14.344895] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xbb43e6a0e20 00:19:46.335 [2024-07-15 09:46:14.344969] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xbb43e634f00 00:19:46.335 [2024-07-15 09:46:14.344972] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xbb43e634f00 00:19:46.335 [2024-07-15 09:46:14.344994] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:46.335 09:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:19:46.335 09:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:46.335 09:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:46.335 09:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:46.335 09:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:46.335 09:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:46.335 09:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:46.335 09:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:46.335 09:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:46.335 09:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:46.335 09:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.335 09:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.593 09:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:46.593 "name": "raid_bdev1", 00:19:46.593 "uuid": "134e773b-428f-11ef-a0af-c98d8ee52a94", 00:19:46.593 "strip_size_kb": 64, 00:19:46.593 "state": "online", 00:19:46.593 
"raid_level": "concat", 00:19:46.593 "superblock": true, 00:19:46.593 "num_base_bdevs": 2, 00:19:46.593 "num_base_bdevs_discovered": 2, 00:19:46.593 "num_base_bdevs_operational": 2, 00:19:46.593 "base_bdevs_list": [ 00:19:46.593 { 00:19:46.593 "name": "BaseBdev1", 00:19:46.593 "uuid": "86795608-46ce-3059-a3f5-86c1e20902cf", 00:19:46.593 "is_configured": true, 00:19:46.593 "data_offset": 2048, 00:19:46.593 "data_size": 63488 00:19:46.593 }, 00:19:46.593 { 00:19:46.593 "name": "BaseBdev2", 00:19:46.593 "uuid": "ec5e3f31-b72b-115a-9cea-54dcc4dbc94a", 00:19:46.593 "is_configured": true, 00:19:46.593 "data_offset": 2048, 00:19:46.594 "data_size": 63488 00:19:46.594 } 00:19:46.594 ] 00:19:46.594 }' 00:19:46.594 09:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:46.594 09:46:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.851 09:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:19:46.851 09:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:47.109 [2024-07-15 09:46:14.952244] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xbb43e6a0ec0 00:19:48.043 09:46:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:19:48.043 09:46:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:19:48.043 09:46:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:19:48.043 09:46:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:19:48.043 09:46:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:19:48.043 09:46:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:48.043 09:46:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:48.043 09:46:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:48.043 09:46:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:48.043 09:46:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:48.043 09:46:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:48.043 09:46:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:48.043 09:46:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:48.043 09:46:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:48.043 09:46:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.043 09:46:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.303 09:46:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:48.303 "name": "raid_bdev1", 00:19:48.303 "uuid": "134e773b-428f-11ef-a0af-c98d8ee52a94", 00:19:48.303 "strip_size_kb": 64, 00:19:48.303 "state": "online", 00:19:48.303 
"raid_level": "concat", 00:19:48.303 "superblock": true, 00:19:48.303 "num_base_bdevs": 2, 00:19:48.303 "num_base_bdevs_discovered": 2, 00:19:48.303 "num_base_bdevs_operational": 2, 00:19:48.303 "base_bdevs_list": [ 00:19:48.303 { 00:19:48.303 "name": "BaseBdev1", 00:19:48.303 "uuid": "86795608-46ce-3059-a3f5-86c1e20902cf", 00:19:48.303 "is_configured": true, 00:19:48.303 "data_offset": 2048, 00:19:48.303 "data_size": 63488 00:19:48.303 }, 00:19:48.303 { 00:19:48.303 "name": "BaseBdev2", 00:19:48.303 "uuid": "ec5e3f31-b72b-115a-9cea-54dcc4dbc94a", 00:19:48.303 "is_configured": true, 00:19:48.303 "data_offset": 2048, 00:19:48.303 "data_size": 63488 00:19:48.303 } 00:19:48.303 ] 00:19:48.303 }' 00:19:48.303 09:46:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:48.303 09:46:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.562 09:46:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:48.820 [2024-07-15 09:46:16.838696] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:48.820 [2024-07-15 09:46:16.838732] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:48.820 [2024-07-15 09:46:16.839055] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:48.820 [2024-07-15 09:46:16.839063] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:48.820 [2024-07-15 09:46:16.839069] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:48.820 [2024-07-15 09:46:16.839074] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xbb43e634f00 name raid_bdev1, state offline 00:19:48.820 0 00:19:48.820 09:46:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 50638 00:19:48.820 09:46:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 50638 ']' 00:19:48.820 09:46:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 50638 00:19:48.820 09:46:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:19:48.820 09:46:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:19:48.820 09:46:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 50638 00:19:48.820 09:46:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:19:48.820 09:46:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:19:48.820 09:46:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:19:48.820 killing process with pid 50638 00:19:48.820 09:46:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 50638' 00:19:48.820 09:46:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 50638 00:19:48.820 [2024-07-15 09:46:16.871867] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:48.820 09:46:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 50638 00:19:48.820 [2024-07-15 09:46:16.889155] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:49.078 09:46:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job 
/raidtest/tmp.AqynFfYk2Y 00:19:49.078 09:46:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:19:49.078 09:46:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:19:49.078 09:46:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.53 00:19:49.078 09:46:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:19:49.078 09:46:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:49.078 09:46:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:49.078 09:46:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.53 != \0\.\0\0 ]] 00:19:49.078 00:19:49.079 real 0m5.258s 00:19:49.079 user 0m7.383s 00:19:49.079 sys 0m1.348s 00:19:49.079 09:46:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:49.079 ************************************ 00:19:49.079 END TEST raid_write_error_test 00:19:49.079 ************************************ 00:19:49.079 09:46:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.336 09:46:17 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:49.336 09:46:17 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:19:49.336 09:46:17 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:19:49.336 09:46:17 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:49.336 09:46:17 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:49.336 09:46:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:49.336 ************************************ 00:19:49.336 START TEST raid_state_function_test 00:19:49.336 ************************************ 00:19:49.336 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 false 00:19:49.336 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:19:49.337 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:19:49.337 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:19:49.337 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:19:49.337 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:19:49.337 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:49.337 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:19:49.337 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:49.337 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:49.337 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:19:49.337 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:49.337 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:49.337 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:49.337 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:19:49.337 09:46:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:19:49.337 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:19:49.337 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:19:49.337 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:19:49.337 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:19:49.337 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:19:49.337 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:19:49.337 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:19:49.337 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=50760 00:19:49.337 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 50760' 00:19:49.337 Process raid pid: 50760 00:19:49.337 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 50760 /var/tmp/spdk-raid.sock 00:19:49.337 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:49.337 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 50760 ']' 00:19:49.337 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:49.337 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:49.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:49.337 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:49.337 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:49.337 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.337 [2024-07-15 09:46:17.230594] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:19:49.337 [2024-07-15 09:46:17.230908] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:49.902 EAL: TSC is not safe to use in SMP mode 00:19:49.902 EAL: TSC is not invariant 00:19:49.902 [2024-07-15 09:46:17.944183] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.159 [2024-07-15 09:46:18.059818] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:19:50.160 [2024-07-15 09:46:18.062327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.160 [2024-07-15 09:46:18.063049] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:50.160 [2024-07-15 09:46:18.063061] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:50.160 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:50.160 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:19:50.160 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:19:50.418 [2024-07-15 09:46:18.353976] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:50.418 [2024-07-15 09:46:18.354034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:50.418 [2024-07-15 09:46:18.354039] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:50.418 [2024-07-15 09:46:18.354046] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:50.418 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:50.418 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:50.418 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:50.418 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:50.418 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:50.418 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:50.418 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:50.418 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:50.418 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:50.418 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:50.418 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:50.418 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:50.675 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:50.675 "name": "Existed_Raid", 00:19:50.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.675 "strip_size_kb": 0, 00:19:50.675 "state": "configuring", 00:19:50.675 "raid_level": "raid1", 00:19:50.675 "superblock": false, 00:19:50.675 "num_base_bdevs": 2, 00:19:50.675 "num_base_bdevs_discovered": 0, 00:19:50.675 "num_base_bdevs_operational": 2, 00:19:50.675 "base_bdevs_list": [ 00:19:50.675 { 00:19:50.675 "name": "BaseBdev1", 00:19:50.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.675 "is_configured": false, 00:19:50.675 "data_offset": 0, 00:19:50.675 "data_size": 0 00:19:50.675 }, 00:19:50.675 { 00:19:50.675 "name": "BaseBdev2", 00:19:50.675 
"uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.675 "is_configured": false, 00:19:50.676 "data_offset": 0, 00:19:50.676 "data_size": 0 00:19:50.676 } 00:19:50.676 ] 00:19:50.676 }' 00:19:50.676 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:50.676 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:50.934 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:51.192 [2024-07-15 09:46:19.030035] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:51.192 [2024-07-15 09:46:19.030075] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3ebc88634500 name Existed_Raid, state configuring 00:19:51.192 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:19:51.192 [2024-07-15 09:46:19.214061] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:51.192 [2024-07-15 09:46:19.214115] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:51.192 [2024-07-15 09:46:19.214118] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:51.192 [2024-07-15 09:46:19.214125] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:51.192 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:51.451 [2024-07-15 09:46:19.419223] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:51.451 BaseBdev1 00:19:51.451 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:19:51.451 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:51.451 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:51.451 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:51.451 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:51.451 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:51.451 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:51.709 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:51.967 [ 00:19:51.967 { 00:19:51.967 "name": "BaseBdev1", 00:19:51.967 "aliases": [ 00:19:51.967 "1654b367-428f-11ef-a0af-c98d8ee52a94" 00:19:51.967 ], 00:19:51.967 "product_name": "Malloc disk", 00:19:51.967 "block_size": 512, 00:19:51.967 "num_blocks": 65536, 00:19:51.967 "uuid": "1654b367-428f-11ef-a0af-c98d8ee52a94", 00:19:51.967 "assigned_rate_limits": { 00:19:51.967 "rw_ios_per_sec": 0, 00:19:51.967 "rw_mbytes_per_sec": 0, 00:19:51.967 "r_mbytes_per_sec": 0, 00:19:51.967 "w_mbytes_per_sec": 0 00:19:51.967 }, 00:19:51.967 
"claimed": true, 00:19:51.967 "claim_type": "exclusive_write", 00:19:51.967 "zoned": false, 00:19:51.967 "supported_io_types": { 00:19:51.967 "read": true, 00:19:51.967 "write": true, 00:19:51.967 "unmap": true, 00:19:51.967 "flush": true, 00:19:51.967 "reset": true, 00:19:51.967 "nvme_admin": false, 00:19:51.967 "nvme_io": false, 00:19:51.967 "nvme_io_md": false, 00:19:51.967 "write_zeroes": true, 00:19:51.967 "zcopy": true, 00:19:51.967 "get_zone_info": false, 00:19:51.967 "zone_management": false, 00:19:51.967 "zone_append": false, 00:19:51.967 "compare": false, 00:19:51.967 "compare_and_write": false, 00:19:51.967 "abort": true, 00:19:51.967 "seek_hole": false, 00:19:51.967 "seek_data": false, 00:19:51.967 "copy": true, 00:19:51.967 "nvme_iov_md": false 00:19:51.967 }, 00:19:51.967 "memory_domains": [ 00:19:51.967 { 00:19:51.967 "dma_device_id": "system", 00:19:51.967 "dma_device_type": 1 00:19:51.967 }, 00:19:51.967 { 00:19:51.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:51.967 "dma_device_type": 2 00:19:51.967 } 00:19:51.967 ], 00:19:51.967 "driver_specific": {} 00:19:51.967 } 00:19:51.967 ] 00:19:51.967 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:51.967 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:51.967 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:51.967 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:51.967 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:51.967 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:51.968 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:51.968 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:51.968 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:51.968 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:51.968 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:51.968 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:51.968 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:51.968 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:51.968 "name": "Existed_Raid", 00:19:51.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.968 "strip_size_kb": 0, 00:19:51.968 "state": "configuring", 00:19:51.968 "raid_level": "raid1", 00:19:51.968 "superblock": false, 00:19:51.968 "num_base_bdevs": 2, 00:19:51.968 "num_base_bdevs_discovered": 1, 00:19:51.968 "num_base_bdevs_operational": 2, 00:19:51.968 "base_bdevs_list": [ 00:19:51.968 { 00:19:51.968 "name": "BaseBdev1", 00:19:51.968 "uuid": "1654b367-428f-11ef-a0af-c98d8ee52a94", 00:19:51.968 "is_configured": true, 00:19:51.968 "data_offset": 0, 00:19:51.968 "data_size": 65536 00:19:51.968 }, 00:19:51.968 { 00:19:51.968 "name": "BaseBdev2", 00:19:51.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.968 
"is_configured": false, 00:19:51.968 "data_offset": 0, 00:19:51.968 "data_size": 0 00:19:51.968 } 00:19:51.968 ] 00:19:51.968 }' 00:19:51.968 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:51.968 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.533 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:52.792 [2024-07-15 09:46:20.714233] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:52.792 [2024-07-15 09:46:20.714266] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3ebc88634500 name Existed_Raid, state configuring 00:19:52.792 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:19:53.050 [2024-07-15 09:46:21.002277] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:53.050 [2024-07-15 09:46:21.003190] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:53.050 [2024-07-15 09:46:21.003232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:53.050 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:19:53.050 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:53.050 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:53.050 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:53.050 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:53.050 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:53.050 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:53.050 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:53.050 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:53.050 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:53.050 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:53.050 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:53.050 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:53.050 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.309 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:53.309 "name": "Existed_Raid", 00:19:53.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.309 "strip_size_kb": 0, 00:19:53.309 "state": "configuring", 00:19:53.309 "raid_level": "raid1", 00:19:53.309 "superblock": false, 00:19:53.309 "num_base_bdevs": 2, 00:19:53.309 "num_base_bdevs_discovered": 1, 00:19:53.309 "num_base_bdevs_operational": 
2, 00:19:53.309 "base_bdevs_list": [ 00:19:53.309 { 00:19:53.309 "name": "BaseBdev1", 00:19:53.309 "uuid": "1654b367-428f-11ef-a0af-c98d8ee52a94", 00:19:53.309 "is_configured": true, 00:19:53.309 "data_offset": 0, 00:19:53.309 "data_size": 65536 00:19:53.309 }, 00:19:53.309 { 00:19:53.309 "name": "BaseBdev2", 00:19:53.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.309 "is_configured": false, 00:19:53.309 "data_offset": 0, 00:19:53.309 "data_size": 0 00:19:53.309 } 00:19:53.309 ] 00:19:53.309 }' 00:19:53.309 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:53.309 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.875 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:53.875 [2024-07-15 09:46:21.878524] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:53.875 [2024-07-15 09:46:21.878559] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3ebc88634a00 00:19:53.875 [2024-07-15 09:46:21.878563] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:53.875 [2024-07-15 09:46:21.878582] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3ebc88697e20 00:19:53.875 [2024-07-15 09:46:21.878679] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3ebc88634a00 00:19:53.875 [2024-07-15 09:46:21.878682] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x3ebc88634a00 00:19:53.875 [2024-07-15 09:46:21.878712] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:53.875 BaseBdev2 00:19:53.875 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:19:53.875 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:53.875 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:53.875 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:53.875 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:53.875 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:53.875 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:54.133 09:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:54.392 [ 00:19:54.392 { 00:19:54.392 "name": "BaseBdev2", 00:19:54.392 "aliases": [ 00:19:54.392 "17cc1c4a-428f-11ef-a0af-c98d8ee52a94" 00:19:54.392 ], 00:19:54.392 "product_name": "Malloc disk", 00:19:54.392 "block_size": 512, 00:19:54.392 "num_blocks": 65536, 00:19:54.392 "uuid": "17cc1c4a-428f-11ef-a0af-c98d8ee52a94", 00:19:54.392 "assigned_rate_limits": { 00:19:54.392 "rw_ios_per_sec": 0, 00:19:54.392 "rw_mbytes_per_sec": 0, 00:19:54.392 "r_mbytes_per_sec": 0, 00:19:54.392 "w_mbytes_per_sec": 0 00:19:54.392 }, 00:19:54.392 "claimed": true, 00:19:54.392 "claim_type": "exclusive_write", 00:19:54.392 "zoned": false, 00:19:54.392 
"supported_io_types": { 00:19:54.392 "read": true, 00:19:54.392 "write": true, 00:19:54.392 "unmap": true, 00:19:54.392 "flush": true, 00:19:54.392 "reset": true, 00:19:54.392 "nvme_admin": false, 00:19:54.392 "nvme_io": false, 00:19:54.392 "nvme_io_md": false, 00:19:54.392 "write_zeroes": true, 00:19:54.392 "zcopy": true, 00:19:54.392 "get_zone_info": false, 00:19:54.392 "zone_management": false, 00:19:54.392 "zone_append": false, 00:19:54.392 "compare": false, 00:19:54.392 "compare_and_write": false, 00:19:54.392 "abort": true, 00:19:54.392 "seek_hole": false, 00:19:54.392 "seek_data": false, 00:19:54.392 "copy": true, 00:19:54.392 "nvme_iov_md": false 00:19:54.392 }, 00:19:54.392 "memory_domains": [ 00:19:54.392 { 00:19:54.392 "dma_device_id": "system", 00:19:54.392 "dma_device_type": 1 00:19:54.392 }, 00:19:54.392 { 00:19:54.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:54.392 "dma_device_type": 2 00:19:54.392 } 00:19:54.392 ], 00:19:54.392 "driver_specific": {} 00:19:54.392 } 00:19:54.392 ] 00:19:54.392 09:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:54.392 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:54.392 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:54.392 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:54.392 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:54.392 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:54.392 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:54.392 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:54.392 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:54.392 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:54.392 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:54.392 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:54.392 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:54.392 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:54.392 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:54.958 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:54.958 "name": "Existed_Raid", 00:19:54.958 "uuid": "17cc2390-428f-11ef-a0af-c98d8ee52a94", 00:19:54.958 "strip_size_kb": 0, 00:19:54.958 "state": "online", 00:19:54.958 "raid_level": "raid1", 00:19:54.958 "superblock": false, 00:19:54.958 "num_base_bdevs": 2, 00:19:54.958 "num_base_bdevs_discovered": 2, 00:19:54.958 "num_base_bdevs_operational": 2, 00:19:54.958 "base_bdevs_list": [ 00:19:54.958 { 00:19:54.958 "name": "BaseBdev1", 00:19:54.958 "uuid": "1654b367-428f-11ef-a0af-c98d8ee52a94", 00:19:54.958 "is_configured": true, 00:19:54.958 "data_offset": 0, 00:19:54.958 "data_size": 65536 00:19:54.958 }, 00:19:54.958 { 00:19:54.958 "name": 
"BaseBdev2", 00:19:54.958 "uuid": "17cc1c4a-428f-11ef-a0af-c98d8ee52a94", 00:19:54.958 "is_configured": true, 00:19:54.958 "data_offset": 0, 00:19:54.958 "data_size": 65536 00:19:54.958 } 00:19:54.958 ] 00:19:54.958 }' 00:19:54.958 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:54.958 09:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.958 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:19:54.958 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:54.958 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:54.958 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:54.958 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:54.958 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:54.958 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:54.958 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:55.215 [2024-07-15 09:46:23.238540] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:55.215 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:55.215 "name": "Existed_Raid", 00:19:55.215 "aliases": [ 00:19:55.215 "17cc2390-428f-11ef-a0af-c98d8ee52a94" 00:19:55.215 ], 00:19:55.215 "product_name": "Raid Volume", 00:19:55.215 "block_size": 512, 00:19:55.215 "num_blocks": 65536, 00:19:55.215 "uuid": "17cc2390-428f-11ef-a0af-c98d8ee52a94", 00:19:55.215 "assigned_rate_limits": { 00:19:55.215 "rw_ios_per_sec": 0, 00:19:55.215 "rw_mbytes_per_sec": 0, 00:19:55.215 "r_mbytes_per_sec": 0, 00:19:55.215 "w_mbytes_per_sec": 0 00:19:55.215 }, 00:19:55.215 "claimed": false, 00:19:55.215 "zoned": false, 00:19:55.215 "supported_io_types": { 00:19:55.215 "read": true, 00:19:55.215 "write": true, 00:19:55.215 "unmap": false, 00:19:55.215 "flush": false, 00:19:55.215 "reset": true, 00:19:55.215 "nvme_admin": false, 00:19:55.215 "nvme_io": false, 00:19:55.215 "nvme_io_md": false, 00:19:55.215 "write_zeroes": true, 00:19:55.215 "zcopy": false, 00:19:55.216 "get_zone_info": false, 00:19:55.216 "zone_management": false, 00:19:55.216 "zone_append": false, 00:19:55.216 "compare": false, 00:19:55.216 "compare_and_write": false, 00:19:55.216 "abort": false, 00:19:55.216 "seek_hole": false, 00:19:55.216 "seek_data": false, 00:19:55.216 "copy": false, 00:19:55.216 "nvme_iov_md": false 00:19:55.216 }, 00:19:55.216 "memory_domains": [ 00:19:55.216 { 00:19:55.216 "dma_device_id": "system", 00:19:55.216 "dma_device_type": 1 00:19:55.216 }, 00:19:55.216 { 00:19:55.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:55.216 "dma_device_type": 2 00:19:55.216 }, 00:19:55.216 { 00:19:55.216 "dma_device_id": "system", 00:19:55.216 "dma_device_type": 1 00:19:55.216 }, 00:19:55.216 { 00:19:55.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:55.216 "dma_device_type": 2 00:19:55.216 } 00:19:55.216 ], 00:19:55.216 "driver_specific": { 00:19:55.216 "raid": { 00:19:55.216 "uuid": "17cc2390-428f-11ef-a0af-c98d8ee52a94", 00:19:55.216 "strip_size_kb": 0, 00:19:55.216 "state": "online", 00:19:55.216 
"raid_level": "raid1", 00:19:55.216 "superblock": false, 00:19:55.216 "num_base_bdevs": 2, 00:19:55.216 "num_base_bdevs_discovered": 2, 00:19:55.216 "num_base_bdevs_operational": 2, 00:19:55.216 "base_bdevs_list": [ 00:19:55.216 { 00:19:55.216 "name": "BaseBdev1", 00:19:55.216 "uuid": "1654b367-428f-11ef-a0af-c98d8ee52a94", 00:19:55.216 "is_configured": true, 00:19:55.216 "data_offset": 0, 00:19:55.216 "data_size": 65536 00:19:55.216 }, 00:19:55.216 { 00:19:55.216 "name": "BaseBdev2", 00:19:55.216 "uuid": "17cc1c4a-428f-11ef-a0af-c98d8ee52a94", 00:19:55.216 "is_configured": true, 00:19:55.216 "data_offset": 0, 00:19:55.216 "data_size": 65536 00:19:55.216 } 00:19:55.216 ] 00:19:55.216 } 00:19:55.216 } 00:19:55.216 }' 00:19:55.216 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:55.216 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:19:55.216 BaseBdev2' 00:19:55.216 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:55.216 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:55.216 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:55.474 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:55.474 "name": "BaseBdev1", 00:19:55.474 "aliases": [ 00:19:55.474 "1654b367-428f-11ef-a0af-c98d8ee52a94" 00:19:55.474 ], 00:19:55.474 "product_name": "Malloc disk", 00:19:55.474 "block_size": 512, 00:19:55.474 "num_blocks": 65536, 00:19:55.474 "uuid": "1654b367-428f-11ef-a0af-c98d8ee52a94", 00:19:55.474 "assigned_rate_limits": { 00:19:55.474 "rw_ios_per_sec": 0, 00:19:55.474 "rw_mbytes_per_sec": 0, 00:19:55.474 "r_mbytes_per_sec": 0, 00:19:55.474 "w_mbytes_per_sec": 0 00:19:55.474 }, 00:19:55.474 "claimed": true, 00:19:55.474 "claim_type": "exclusive_write", 00:19:55.474 "zoned": false, 00:19:55.474 "supported_io_types": { 00:19:55.474 "read": true, 00:19:55.474 "write": true, 00:19:55.474 "unmap": true, 00:19:55.474 "flush": true, 00:19:55.474 "reset": true, 00:19:55.474 "nvme_admin": false, 00:19:55.474 "nvme_io": false, 00:19:55.474 "nvme_io_md": false, 00:19:55.474 "write_zeroes": true, 00:19:55.474 "zcopy": true, 00:19:55.474 "get_zone_info": false, 00:19:55.474 "zone_management": false, 00:19:55.474 "zone_append": false, 00:19:55.474 "compare": false, 00:19:55.474 "compare_and_write": false, 00:19:55.474 "abort": true, 00:19:55.474 "seek_hole": false, 00:19:55.474 "seek_data": false, 00:19:55.474 "copy": true, 00:19:55.474 "nvme_iov_md": false 00:19:55.474 }, 00:19:55.474 "memory_domains": [ 00:19:55.474 { 00:19:55.474 "dma_device_id": "system", 00:19:55.474 "dma_device_type": 1 00:19:55.474 }, 00:19:55.474 { 00:19:55.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:55.474 "dma_device_type": 2 00:19:55.474 } 00:19:55.474 ], 00:19:55.474 "driver_specific": {} 00:19:55.474 }' 00:19:55.474 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:55.474 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:55.474 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:55.474 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq 
.md_size 00:19:55.474 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:55.474 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:55.474 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:55.474 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:55.474 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:55.474 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:55.474 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:55.474 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:55.474 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:55.474 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:55.474 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:55.732 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:55.732 "name": "BaseBdev2", 00:19:55.732 "aliases": [ 00:19:55.732 "17cc1c4a-428f-11ef-a0af-c98d8ee52a94" 00:19:55.732 ], 00:19:55.732 "product_name": "Malloc disk", 00:19:55.732 "block_size": 512, 00:19:55.732 "num_blocks": 65536, 00:19:55.732 "uuid": "17cc1c4a-428f-11ef-a0af-c98d8ee52a94", 00:19:55.732 "assigned_rate_limits": { 00:19:55.732 "rw_ios_per_sec": 0, 00:19:55.732 "rw_mbytes_per_sec": 0, 00:19:55.732 "r_mbytes_per_sec": 0, 00:19:55.732 "w_mbytes_per_sec": 0 00:19:55.732 }, 00:19:55.732 "claimed": true, 00:19:55.732 "claim_type": "exclusive_write", 00:19:55.732 "zoned": false, 00:19:55.732 "supported_io_types": { 00:19:55.732 "read": true, 00:19:55.732 "write": true, 00:19:55.732 "unmap": true, 00:19:55.732 "flush": true, 00:19:55.732 "reset": true, 00:19:55.732 "nvme_admin": false, 00:19:55.732 "nvme_io": false, 00:19:55.732 "nvme_io_md": false, 00:19:55.732 "write_zeroes": true, 00:19:55.732 "zcopy": true, 00:19:55.732 "get_zone_info": false, 00:19:55.732 "zone_management": false, 00:19:55.732 "zone_append": false, 00:19:55.732 "compare": false, 00:19:55.732 "compare_and_write": false, 00:19:55.732 "abort": true, 00:19:55.732 "seek_hole": false, 00:19:55.732 "seek_data": false, 00:19:55.732 "copy": true, 00:19:55.732 "nvme_iov_md": false 00:19:55.732 }, 00:19:55.732 "memory_domains": [ 00:19:55.732 { 00:19:55.732 "dma_device_id": "system", 00:19:55.732 "dma_device_type": 1 00:19:55.732 }, 00:19:55.732 { 00:19:55.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:55.732 "dma_device_type": 2 00:19:55.732 } 00:19:55.732 ], 00:19:55.732 "driver_specific": {} 00:19:55.732 }' 00:19:55.732 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:55.732 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:55.732 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:55.732 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:55.732 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:55.732 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 
null == null ]] 00:19:55.732 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:55.732 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:55.732 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:55.732 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:55.990 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:55.990 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:55.990 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:55.990 [2024-07-15 09:46:24.026615] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:55.990 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:19:55.990 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:19:55.990 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:55.990 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:19:55.990 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:19:55.990 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:55.990 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:55.990 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:55.990 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:55.990 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:55.990 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:55.990 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:55.990 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:55.990 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:55.990 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:55.990 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.990 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:56.248 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:56.248 "name": "Existed_Raid", 00:19:56.248 "uuid": "17cc2390-428f-11ef-a0af-c98d8ee52a94", 00:19:56.248 "strip_size_kb": 0, 00:19:56.248 "state": "online", 00:19:56.248 "raid_level": "raid1", 00:19:56.248 "superblock": false, 00:19:56.248 "num_base_bdevs": 2, 00:19:56.248 "num_base_bdevs_discovered": 1, 00:19:56.248 "num_base_bdevs_operational": 1, 00:19:56.248 "base_bdevs_list": [ 00:19:56.248 { 00:19:56.248 "name": null, 00:19:56.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.248 "is_configured": false, 
00:19:56.248 "data_offset": 0, 00:19:56.248 "data_size": 65536 00:19:56.248 }, 00:19:56.248 { 00:19:56.248 "name": "BaseBdev2", 00:19:56.248 "uuid": "17cc1c4a-428f-11ef-a0af-c98d8ee52a94", 00:19:56.248 "is_configured": true, 00:19:56.248 "data_offset": 0, 00:19:56.248 "data_size": 65536 00:19:56.248 } 00:19:56.248 ] 00:19:56.248 }' 00:19:56.248 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:56.248 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.507 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:19:56.507 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:56.507 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.507 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:56.767 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:56.767 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:56.767 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:57.026 [2024-07-15 09:46:24.911117] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:57.026 [2024-07-15 09:46:24.911168] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:57.026 [2024-07-15 09:46:24.919768] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:57.026 [2024-07-15 09:46:24.919787] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:57.026 [2024-07-15 09:46:24.919791] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3ebc88634a00 name Existed_Raid, state offline 00:19:57.026 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:57.026 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:57.026 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:57.026 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:19:57.284 09:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:19:57.284 09:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:19:57.284 09:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:19:57.284 09:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 50760 00:19:57.284 09:46:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 50760 ']' 00:19:57.284 09:46:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 50760 00:19:57.284 09:46:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:19:57.284 09:46:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:19:57.284 09:46:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # ps -c -o command 50760 00:19:57.284 09:46:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:19:57.284 09:46:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:19:57.284 09:46:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:19:57.284 killing process with pid 50760 00:19:57.284 09:46:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 50760' 00:19:57.284 09:46:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 50760 00:19:57.284 [2024-07-15 09:46:25.141888] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:57.284 [2024-07-15 09:46:25.141928] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:57.284 09:46:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 50760 00:19:57.542 09:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:19:57.542 00:19:57.542 real 0m8.184s 00:19:57.542 user 0m13.625s 00:19:57.542 sys 0m1.967s 00:19:57.542 09:46:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:57.542 09:46:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.542 ************************************ 00:19:57.542 END TEST raid_state_function_test 00:19:57.542 ************************************ 00:19:57.542 09:46:25 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:57.542 09:46:25 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:19:57.542 09:46:25 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:57.542 09:46:25 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:57.542 09:46:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:57.542 ************************************ 00:19:57.542 START TEST raid_state_function_test_sb 00:19:57.542 ************************************ 00:19:57.542 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:19:57.542 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:19:57.542 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:19:57.542 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:19:57.542 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:19:57.542 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:19:57.542 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:57.542 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:19:57.542 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:57.542 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:57.542 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:19:57.542 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:57.542 09:46:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:57.542 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:57.542 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:19:57.542 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:19:57.542 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:19:57.542 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:19:57.542 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:19:57.542 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:19:57.542 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:19:57.542 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:19:57.542 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:19:57.542 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=51031 00:19:57.542 Process raid pid: 51031 00:19:57.542 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 51031' 00:19:57.543 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:57.543 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 51031 /var/tmp/spdk-raid.sock 00:19:57.543 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 51031 ']' 00:19:57.543 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:57.543 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:57.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:57.543 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:57.543 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:57.543 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.543 [2024-07-15 09:46:25.470321] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:19:57.543 [2024-07-15 09:46:25.470589] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:58.109 EAL: TSC is not safe to use in SMP mode 00:19:58.109 EAL: TSC is not invariant 00:19:58.109 [2024-07-15 09:46:26.183322] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.366 [2024-07-15 09:46:26.322940] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
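The daemon launch traced above can be reproduced outside the harness; a minimal sketch, assuming the repo and RPC socket paths from this run (the polling loop is an illustrative stand-in for the harness's waitforlisten helper, which additionally checks that the pid stays alive while it waits):

SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk-raid.sock
"$SPDK/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 -L bdev_raid &
raid_pid=$!
# crude waitforlisten: poll until the UNIX-domain RPC socket appears
while [ ! -S "$SOCK" ]; do sleep 0.1; done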
00:19:58.366 [2024-07-15 09:46:26.325470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.366 [2024-07-15 09:46:26.326198] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:58.366 [2024-07-15 09:46:26.326211] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:58.367 09:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:58.367 09:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:19:58.367 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:19:58.625 [2024-07-15 09:46:26.537349] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:58.625 [2024-07-15 09:46:26.537416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:58.625 [2024-07-15 09:46:26.537421] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:58.625 [2024-07-15 09:46:26.537428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:58.625 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:58.625 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:58.625 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:58.625 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:58.625 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:58.625 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:58.625 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:58.625 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:58.625 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:58.625 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:58.625 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:58.625 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:58.883 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:58.883 "name": "Existed_Raid", 00:19:58.883 "uuid": "1a9302e7-428f-11ef-a0af-c98d8ee52a94", 00:19:58.883 "strip_size_kb": 0, 00:19:58.883 "state": "configuring", 00:19:58.883 "raid_level": "raid1", 00:19:58.883 "superblock": true, 00:19:58.883 "num_base_bdevs": 2, 00:19:58.883 "num_base_bdevs_discovered": 0, 00:19:58.884 "num_base_bdevs_operational": 2, 00:19:58.884 "base_bdevs_list": [ 00:19:58.884 { 00:19:58.884 "name": "BaseBdev1", 00:19:58.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.884 "is_configured": false, 00:19:58.884 "data_offset": 0, 00:19:58.884 "data_size": 0 00:19:58.884 }, 00:19:58.884 
{ 00:19:58.884 "name": "BaseBdev2", 00:19:58.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.884 "is_configured": false, 00:19:58.884 "data_offset": 0, 00:19:58.884 "data_size": 0 00:19:58.884 } 00:19:58.884 ] 00:19:58.884 }' 00:19:58.884 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:58.884 09:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:59.142 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:59.400 [2024-07-15 09:46:27.301372] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:59.400 [2024-07-15 09:46:27.301399] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x359d95a34500 name Existed_Raid, state configuring 00:19:59.400 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:19:59.400 [2024-07-15 09:46:27.481396] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:59.400 [2024-07-15 09:46:27.481445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:59.400 [2024-07-15 09:46:27.481448] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:59.400 [2024-07-15 09:46:27.481455] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:59.658 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:59.658 [2024-07-15 09:46:27.686548] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:59.658 BaseBdev1 00:19:59.658 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:19:59.658 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:59.658 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:59.658 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:59.658 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:59.658 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:59.658 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:59.915 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:00.173 [ 00:20:00.173 { 00:20:00.173 "name": "BaseBdev1", 00:20:00.173 "aliases": [ 00:20:00.173 "1b4231d5-428f-11ef-a0af-c98d8ee52a94" 00:20:00.173 ], 00:20:00.173 "product_name": "Malloc disk", 00:20:00.173 "block_size": 512, 00:20:00.173 "num_blocks": 65536, 00:20:00.173 "uuid": "1b4231d5-428f-11ef-a0af-c98d8ee52a94", 00:20:00.173 "assigned_rate_limits": { 00:20:00.173 "rw_ios_per_sec": 0, 00:20:00.173 "rw_mbytes_per_sec": 0, 00:20:00.173 
"r_mbytes_per_sec": 0, 00:20:00.173 "w_mbytes_per_sec": 0 00:20:00.173 }, 00:20:00.173 "claimed": true, 00:20:00.173 "claim_type": "exclusive_write", 00:20:00.173 "zoned": false, 00:20:00.173 "supported_io_types": { 00:20:00.173 "read": true, 00:20:00.173 "write": true, 00:20:00.173 "unmap": true, 00:20:00.173 "flush": true, 00:20:00.173 "reset": true, 00:20:00.173 "nvme_admin": false, 00:20:00.173 "nvme_io": false, 00:20:00.173 "nvme_io_md": false, 00:20:00.173 "write_zeroes": true, 00:20:00.173 "zcopy": true, 00:20:00.173 "get_zone_info": false, 00:20:00.173 "zone_management": false, 00:20:00.173 "zone_append": false, 00:20:00.173 "compare": false, 00:20:00.173 "compare_and_write": false, 00:20:00.173 "abort": true, 00:20:00.173 "seek_hole": false, 00:20:00.173 "seek_data": false, 00:20:00.173 "copy": true, 00:20:00.173 "nvme_iov_md": false 00:20:00.173 }, 00:20:00.173 "memory_domains": [ 00:20:00.173 { 00:20:00.173 "dma_device_id": "system", 00:20:00.173 "dma_device_type": 1 00:20:00.173 }, 00:20:00.173 { 00:20:00.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:00.173 "dma_device_type": 2 00:20:00.173 } 00:20:00.173 ], 00:20:00.173 "driver_specific": {} 00:20:00.173 } 00:20:00.173 ] 00:20:00.173 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:00.173 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:00.173 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:00.173 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:00.173 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:00.173 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:00.173 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:00.173 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:00.173 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:00.173 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:00.173 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:00.173 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.174 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:00.431 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:00.431 "name": "Existed_Raid", 00:20:00.431 "uuid": "1b230fd1-428f-11ef-a0af-c98d8ee52a94", 00:20:00.431 "strip_size_kb": 0, 00:20:00.431 "state": "configuring", 00:20:00.431 "raid_level": "raid1", 00:20:00.431 "superblock": true, 00:20:00.431 "num_base_bdevs": 2, 00:20:00.431 "num_base_bdevs_discovered": 1, 00:20:00.431 "num_base_bdevs_operational": 2, 00:20:00.431 "base_bdevs_list": [ 00:20:00.431 { 00:20:00.431 "name": "BaseBdev1", 00:20:00.431 "uuid": "1b4231d5-428f-11ef-a0af-c98d8ee52a94", 00:20:00.431 "is_configured": true, 00:20:00.431 "data_offset": 2048, 00:20:00.431 "data_size": 63488 00:20:00.431 }, 
00:20:00.431 { 00:20:00.431 "name": "BaseBdev2", 00:20:00.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.431 "is_configured": false, 00:20:00.431 "data_offset": 0, 00:20:00.431 "data_size": 0 00:20:00.431 } 00:20:00.431 ] 00:20:00.431 }' 00:20:00.431 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:00.431 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:00.690 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:00.956 [2024-07-15 09:46:28.841528] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:00.956 [2024-07-15 09:46:28.841559] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x359d95a34500 name Existed_Raid, state configuring 00:20:00.956 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:20:00.956 [2024-07-15 09:46:29.037559] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:00.956 [2024-07-15 09:46:29.038491] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:00.956 [2024-07-15 09:46:29.038533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:01.213 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:20:01.213 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:01.213 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:01.213 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:01.213 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:01.213 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:01.214 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:01.214 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:01.214 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:01.214 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:01.214 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:01.214 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:01.214 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:01.214 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:01.214 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:01.214 "name": "Existed_Raid", 00:20:01.214 "uuid": "1c108370-428f-11ef-a0af-c98d8ee52a94", 00:20:01.214 "strip_size_kb": 0, 00:20:01.214 "state": "configuring", 
00:20:01.214 "raid_level": "raid1", 00:20:01.214 "superblock": true, 00:20:01.214 "num_base_bdevs": 2, 00:20:01.214 "num_base_bdevs_discovered": 1, 00:20:01.214 "num_base_bdevs_operational": 2, 00:20:01.214 "base_bdevs_list": [ 00:20:01.214 { 00:20:01.214 "name": "BaseBdev1", 00:20:01.214 "uuid": "1b4231d5-428f-11ef-a0af-c98d8ee52a94", 00:20:01.214 "is_configured": true, 00:20:01.214 "data_offset": 2048, 00:20:01.214 "data_size": 63488 00:20:01.214 }, 00:20:01.214 { 00:20:01.214 "name": "BaseBdev2", 00:20:01.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.214 "is_configured": false, 00:20:01.214 "data_offset": 0, 00:20:01.214 "data_size": 0 00:20:01.214 } 00:20:01.214 ] 00:20:01.214 }' 00:20:01.214 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:01.214 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:01.471 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:01.729 [2024-07-15 09:46:29.725760] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:01.729 [2024-07-15 09:46:29.725822] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x359d95a34a00 00:20:01.729 [2024-07-15 09:46:29.725827] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:01.729 [2024-07-15 09:46:29.725844] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x359d95a97e20 00:20:01.729 [2024-07-15 09:46:29.725879] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x359d95a34a00 00:20:01.729 [2024-07-15 09:46:29.725883] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x359d95a34a00 00:20:01.729 [2024-07-15 09:46:29.725900] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:01.729 BaseBdev2 00:20:01.729 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:20:01.729 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:20:01.729 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:01.729 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:01.729 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:01.729 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:01.729 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:01.986 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:02.244 [ 00:20:02.244 { 00:20:02.244 "name": "BaseBdev2", 00:20:02.244 "aliases": [ 00:20:02.244 "1c798154-428f-11ef-a0af-c98d8ee52a94" 00:20:02.244 ], 00:20:02.244 "product_name": "Malloc disk", 00:20:02.244 "block_size": 512, 00:20:02.244 "num_blocks": 65536, 00:20:02.244 "uuid": "1c798154-428f-11ef-a0af-c98d8ee52a94", 00:20:02.244 "assigned_rate_limits": { 00:20:02.245 "rw_ios_per_sec": 0, 00:20:02.245 
"rw_mbytes_per_sec": 0, 00:20:02.245 "r_mbytes_per_sec": 0, 00:20:02.245 "w_mbytes_per_sec": 0 00:20:02.245 }, 00:20:02.245 "claimed": true, 00:20:02.245 "claim_type": "exclusive_write", 00:20:02.245 "zoned": false, 00:20:02.245 "supported_io_types": { 00:20:02.245 "read": true, 00:20:02.245 "write": true, 00:20:02.245 "unmap": true, 00:20:02.245 "flush": true, 00:20:02.245 "reset": true, 00:20:02.245 "nvme_admin": false, 00:20:02.245 "nvme_io": false, 00:20:02.245 "nvme_io_md": false, 00:20:02.245 "write_zeroes": true, 00:20:02.245 "zcopy": true, 00:20:02.245 "get_zone_info": false, 00:20:02.245 "zone_management": false, 00:20:02.245 "zone_append": false, 00:20:02.245 "compare": false, 00:20:02.245 "compare_and_write": false, 00:20:02.245 "abort": true, 00:20:02.245 "seek_hole": false, 00:20:02.245 "seek_data": false, 00:20:02.245 "copy": true, 00:20:02.245 "nvme_iov_md": false 00:20:02.245 }, 00:20:02.245 "memory_domains": [ 00:20:02.245 { 00:20:02.245 "dma_device_id": "system", 00:20:02.245 "dma_device_type": 1 00:20:02.245 }, 00:20:02.245 { 00:20:02.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.245 "dma_device_type": 2 00:20:02.245 } 00:20:02.245 ], 00:20:02.245 "driver_specific": {} 00:20:02.245 } 00:20:02.245 ] 00:20:02.245 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:02.245 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:02.245 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:02.245 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:02.245 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:02.245 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:02.245 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:02.245 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:02.245 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:02.245 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:02.245 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:02.245 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:02.245 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:02.245 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:02.245 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:02.502 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:02.502 "name": "Existed_Raid", 00:20:02.502 "uuid": "1c108370-428f-11ef-a0af-c98d8ee52a94", 00:20:02.502 "strip_size_kb": 0, 00:20:02.502 "state": "online", 00:20:02.502 "raid_level": "raid1", 00:20:02.502 "superblock": true, 00:20:02.502 "num_base_bdevs": 2, 00:20:02.502 "num_base_bdevs_discovered": 2, 00:20:02.502 "num_base_bdevs_operational": 2, 00:20:02.502 
"base_bdevs_list": [ 00:20:02.502 { 00:20:02.502 "name": "BaseBdev1", 00:20:02.502 "uuid": "1b4231d5-428f-11ef-a0af-c98d8ee52a94", 00:20:02.502 "is_configured": true, 00:20:02.502 "data_offset": 2048, 00:20:02.502 "data_size": 63488 00:20:02.502 }, 00:20:02.502 { 00:20:02.502 "name": "BaseBdev2", 00:20:02.502 "uuid": "1c798154-428f-11ef-a0af-c98d8ee52a94", 00:20:02.502 "is_configured": true, 00:20:02.502 "data_offset": 2048, 00:20:02.502 "data_size": 63488 00:20:02.502 } 00:20:02.502 ] 00:20:02.502 }' 00:20:02.502 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:02.502 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.761 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:20:02.761 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:02.761 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:02.761 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:02.761 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:02.761 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:20:02.761 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:02.761 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:02.761 [2024-07-15 09:46:30.845983] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:03.019 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:03.020 "name": "Existed_Raid", 00:20:03.020 "aliases": [ 00:20:03.020 "1c108370-428f-11ef-a0af-c98d8ee52a94" 00:20:03.020 ], 00:20:03.020 "product_name": "Raid Volume", 00:20:03.020 "block_size": 512, 00:20:03.020 "num_blocks": 63488, 00:20:03.020 "uuid": "1c108370-428f-11ef-a0af-c98d8ee52a94", 00:20:03.020 "assigned_rate_limits": { 00:20:03.020 "rw_ios_per_sec": 0, 00:20:03.020 "rw_mbytes_per_sec": 0, 00:20:03.020 "r_mbytes_per_sec": 0, 00:20:03.020 "w_mbytes_per_sec": 0 00:20:03.020 }, 00:20:03.020 "claimed": false, 00:20:03.020 "zoned": false, 00:20:03.020 "supported_io_types": { 00:20:03.020 "read": true, 00:20:03.020 "write": true, 00:20:03.020 "unmap": false, 00:20:03.020 "flush": false, 00:20:03.020 "reset": true, 00:20:03.020 "nvme_admin": false, 00:20:03.020 "nvme_io": false, 00:20:03.020 "nvme_io_md": false, 00:20:03.020 "write_zeroes": true, 00:20:03.020 "zcopy": false, 00:20:03.020 "get_zone_info": false, 00:20:03.020 "zone_management": false, 00:20:03.020 "zone_append": false, 00:20:03.020 "compare": false, 00:20:03.020 "compare_and_write": false, 00:20:03.020 "abort": false, 00:20:03.020 "seek_hole": false, 00:20:03.020 "seek_data": false, 00:20:03.020 "copy": false, 00:20:03.020 "nvme_iov_md": false 00:20:03.020 }, 00:20:03.020 "memory_domains": [ 00:20:03.020 { 00:20:03.020 "dma_device_id": "system", 00:20:03.020 "dma_device_type": 1 00:20:03.020 }, 00:20:03.020 { 00:20:03.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.020 "dma_device_type": 2 00:20:03.020 }, 00:20:03.020 { 00:20:03.020 "dma_device_id": "system", 00:20:03.020 "dma_device_type": 1 00:20:03.020 }, 
00:20:03.020 { 00:20:03.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.020 "dma_device_type": 2 00:20:03.020 } 00:20:03.020 ], 00:20:03.020 "driver_specific": { 00:20:03.020 "raid": { 00:20:03.020 "uuid": "1c108370-428f-11ef-a0af-c98d8ee52a94", 00:20:03.020 "strip_size_kb": 0, 00:20:03.020 "state": "online", 00:20:03.020 "raid_level": "raid1", 00:20:03.020 "superblock": true, 00:20:03.020 "num_base_bdevs": 2, 00:20:03.020 "num_base_bdevs_discovered": 2, 00:20:03.020 "num_base_bdevs_operational": 2, 00:20:03.020 "base_bdevs_list": [ 00:20:03.020 { 00:20:03.020 "name": "BaseBdev1", 00:20:03.020 "uuid": "1b4231d5-428f-11ef-a0af-c98d8ee52a94", 00:20:03.020 "is_configured": true, 00:20:03.020 "data_offset": 2048, 00:20:03.020 "data_size": 63488 00:20:03.020 }, 00:20:03.020 { 00:20:03.020 "name": "BaseBdev2", 00:20:03.020 "uuid": "1c798154-428f-11ef-a0af-c98d8ee52a94", 00:20:03.020 "is_configured": true, 00:20:03.020 "data_offset": 2048, 00:20:03.020 "data_size": 63488 00:20:03.020 } 00:20:03.020 ] 00:20:03.020 } 00:20:03.020 } 00:20:03.020 }' 00:20:03.020 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:03.020 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:20:03.020 BaseBdev2' 00:20:03.020 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:03.020 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:03.020 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:03.020 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:03.020 "name": "BaseBdev1", 00:20:03.020 "aliases": [ 00:20:03.020 "1b4231d5-428f-11ef-a0af-c98d8ee52a94" 00:20:03.020 ], 00:20:03.020 "product_name": "Malloc disk", 00:20:03.020 "block_size": 512, 00:20:03.020 "num_blocks": 65536, 00:20:03.020 "uuid": "1b4231d5-428f-11ef-a0af-c98d8ee52a94", 00:20:03.020 "assigned_rate_limits": { 00:20:03.020 "rw_ios_per_sec": 0, 00:20:03.020 "rw_mbytes_per_sec": 0, 00:20:03.020 "r_mbytes_per_sec": 0, 00:20:03.020 "w_mbytes_per_sec": 0 00:20:03.020 }, 00:20:03.020 "claimed": true, 00:20:03.020 "claim_type": "exclusive_write", 00:20:03.020 "zoned": false, 00:20:03.020 "supported_io_types": { 00:20:03.020 "read": true, 00:20:03.020 "write": true, 00:20:03.020 "unmap": true, 00:20:03.020 "flush": true, 00:20:03.020 "reset": true, 00:20:03.020 "nvme_admin": false, 00:20:03.020 "nvme_io": false, 00:20:03.020 "nvme_io_md": false, 00:20:03.020 "write_zeroes": true, 00:20:03.020 "zcopy": true, 00:20:03.020 "get_zone_info": false, 00:20:03.020 "zone_management": false, 00:20:03.020 "zone_append": false, 00:20:03.020 "compare": false, 00:20:03.020 "compare_and_write": false, 00:20:03.020 "abort": true, 00:20:03.020 "seek_hole": false, 00:20:03.020 "seek_data": false, 00:20:03.020 "copy": true, 00:20:03.020 "nvme_iov_md": false 00:20:03.020 }, 00:20:03.020 "memory_domains": [ 00:20:03.020 { 00:20:03.020 "dma_device_id": "system", 00:20:03.020 "dma_device_type": 1 00:20:03.020 }, 00:20:03.020 { 00:20:03.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.020 "dma_device_type": 2 00:20:03.020 } 00:20:03.020 ], 00:20:03.020 "driver_specific": {} 00:20:03.020 }' 00:20:03.020 09:46:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:03.020 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:03.020 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:03.020 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:03.020 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:03.278 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:03.278 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:03.278 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:03.278 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:03.278 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:03.278 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:03.278 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:03.278 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:03.278 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:03.278 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:03.536 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:03.536 "name": "BaseBdev2", 00:20:03.536 "aliases": [ 00:20:03.536 "1c798154-428f-11ef-a0af-c98d8ee52a94" 00:20:03.536 ], 00:20:03.536 "product_name": "Malloc disk", 00:20:03.536 "block_size": 512, 00:20:03.536 "num_blocks": 65536, 00:20:03.536 "uuid": "1c798154-428f-11ef-a0af-c98d8ee52a94", 00:20:03.536 "assigned_rate_limits": { 00:20:03.536 "rw_ios_per_sec": 0, 00:20:03.536 "rw_mbytes_per_sec": 0, 00:20:03.536 "r_mbytes_per_sec": 0, 00:20:03.536 "w_mbytes_per_sec": 0 00:20:03.536 }, 00:20:03.536 "claimed": true, 00:20:03.536 "claim_type": "exclusive_write", 00:20:03.536 "zoned": false, 00:20:03.536 "supported_io_types": { 00:20:03.536 "read": true, 00:20:03.536 "write": true, 00:20:03.536 "unmap": true, 00:20:03.536 "flush": true, 00:20:03.537 "reset": true, 00:20:03.537 "nvme_admin": false, 00:20:03.537 "nvme_io": false, 00:20:03.537 "nvme_io_md": false, 00:20:03.537 "write_zeroes": true, 00:20:03.537 "zcopy": true, 00:20:03.537 "get_zone_info": false, 00:20:03.537 "zone_management": false, 00:20:03.537 "zone_append": false, 00:20:03.537 "compare": false, 00:20:03.537 "compare_and_write": false, 00:20:03.537 "abort": true, 00:20:03.537 "seek_hole": false, 00:20:03.537 "seek_data": false, 00:20:03.537 "copy": true, 00:20:03.537 "nvme_iov_md": false 00:20:03.537 }, 00:20:03.537 "memory_domains": [ 00:20:03.537 { 00:20:03.537 "dma_device_id": "system", 00:20:03.537 "dma_device_type": 1 00:20:03.537 }, 00:20:03.537 { 00:20:03.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.537 "dma_device_type": 2 00:20:03.537 } 00:20:03.537 ], 00:20:03.537 "driver_specific": {} 00:20:03.537 }' 00:20:03.537 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:03.537 09:46:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:03.537 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:03.537 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:03.537 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:03.537 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:03.537 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:03.537 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:03.537 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:03.537 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:03.537 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:03.537 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:03.537 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:03.795 [2024-07-15 09:46:31.646264] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:03.795 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:20:03.795 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:20:03.795 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:03.795 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:20:03.795 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:20:03.795 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:20:03.795 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:03.795 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:03.795 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:03.795 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:03.795 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:20:03.795 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:03.795 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:03.796 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:03.796 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:03.796 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:03.796 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:03.796 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:03.796 
"name": "Existed_Raid", 00:20:03.796 "uuid": "1c108370-428f-11ef-a0af-c98d8ee52a94", 00:20:03.796 "strip_size_kb": 0, 00:20:03.796 "state": "online", 00:20:03.796 "raid_level": "raid1", 00:20:03.796 "superblock": true, 00:20:03.796 "num_base_bdevs": 2, 00:20:03.796 "num_base_bdevs_discovered": 1, 00:20:03.796 "num_base_bdevs_operational": 1, 00:20:03.796 "base_bdevs_list": [ 00:20:03.796 { 00:20:03.796 "name": null, 00:20:03.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:03.796 "is_configured": false, 00:20:03.796 "data_offset": 2048, 00:20:03.796 "data_size": 63488 00:20:03.796 }, 00:20:03.796 { 00:20:03.796 "name": "BaseBdev2", 00:20:03.796 "uuid": "1c798154-428f-11ef-a0af-c98d8ee52a94", 00:20:03.796 "is_configured": true, 00:20:03.796 "data_offset": 2048, 00:20:03.796 "data_size": 63488 00:20:03.796 } 00:20:03.796 ] 00:20:03.796 }' 00:20:03.796 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:03.796 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:04.055 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:20:04.055 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:04.055 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.055 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:04.314 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:04.314 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:04.314 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:04.572 [2024-07-15 09:46:32.510908] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:04.572 [2024-07-15 09:46:32.510950] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:04.572 [2024-07-15 09:46:32.519462] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:04.572 [2024-07-15 09:46:32.519478] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:04.572 [2024-07-15 09:46:32.519482] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x359d95a34a00 name Existed_Raid, state offline 00:20:04.572 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:04.572 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:04.572 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.572 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:20:04.831 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:20:04.832 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:20:04.832 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:20:04.832 09:46:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 51031 00:20:04.832 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 51031 ']' 00:20:04.832 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 51031 00:20:04.832 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:20:04.832 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:20:04.832 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 51031 00:20:04.832 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:20:04.832 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:20:04.832 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:20:04.832 killing process with pid 51031 00:20:04.832 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 51031' 00:20:04.832 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 51031 00:20:04.832 [2024-07-15 09:46:32.749669] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:04.832 [2024-07-15 09:46:32.749709] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:04.832 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 51031 00:20:05.090 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:20:05.091 00:20:05.091 real 0m7.550s 00:20:05.091 user 0m12.536s 00:20:05.091 sys 0m1.802s 00:20:05.091 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:05.091 ************************************ 00:20:05.091 END TEST raid_state_function_test_sb 00:20:05.091 ************************************ 00:20:05.091 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:05.091 09:46:33 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:05.091 09:46:33 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:20:05.091 09:46:33 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:20:05.091 09:46:33 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:05.091 09:46:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:05.091 ************************************ 00:20:05.091 START TEST raid_superblock_test 00:20:05.091 ************************************ 00:20:05.091 09:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:20:05.091 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:20:05.091 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:20:05.091 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:20:05.091 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:20:05.091 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:20:05.091 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:20:05.091 09:46:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:20:05.091 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:20:05.091 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:20:05.091 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:20:05.091 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:20:05.091 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:20:05.091 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:20:05.091 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:20:05.091 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:20:05.091 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=51301 00:20:05.091 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 51301 /var/tmp/spdk-raid.sock 00:20:05.091 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:20:05.091 09:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 51301 ']' 00:20:05.091 09:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:05.091 09:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:05.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:05.091 09:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:05.091 09:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:05.091 09:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.091 [2024-07-15 09:46:33.067664] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:20:05.091 [2024-07-15 09:46:33.067978] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:20:06.023 EAL: TSC is not safe to use in SMP mode 00:20:06.023 EAL: TSC is not invariant 00:20:06.023 [2024-07-15 09:46:33.766650] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.023 [2024-07-15 09:46:33.879801] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
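The fixture startup above reduces to the following minimal sketch (the binary, socket path, and helper names are taken from this trace; waitforlisten is the autotest_common.sh helper that polls until the RPC socket answers):

# Start a bare bdev service with raid debug logging on a private RPC socket
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
raid_pid=$!
# Block until the app listens on the UNIX domain socket (pid 51301 in this run)
waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock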
00:20:06.023 [2024-07-15 09:46:33.882159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.023 [2024-07-15 09:46:33.882837] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:06.023 [2024-07-15 09:46:33.882847] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:06.023 09:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:06.023 09:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:20:06.024 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:20:06.024 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:06.024 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:20:06.024 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:20:06.024 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:06.024 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:06.024 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:20:06.024 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:06.024 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:20:06.281 malloc1 00:20:06.281 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:06.281 [2024-07-15 09:46:34.361731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:06.281 [2024-07-15 09:46:34.361803] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:06.281 [2024-07-15 09:46:34.361815] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x225a7fe34780 00:20:06.281 [2024-07-15 09:46:34.361822] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:06.281 [2024-07-15 09:46:34.362865] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:06.281 [2024-07-15 09:46:34.362897] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:06.281 pt1 00:20:06.281 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:20:06.281 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:06.540 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:20:06.540 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:20:06.541 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:06.541 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:06.541 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:20:06.541 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:06.541 09:46:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:20:06.541 malloc2 00:20:06.541 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:06.806 [2024-07-15 09:46:34.781859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:06.806 [2024-07-15 09:46:34.781928] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:06.806 [2024-07-15 09:46:34.781938] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x225a7fe34c80 00:20:06.806 [2024-07-15 09:46:34.781945] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:06.806 [2024-07-15 09:46:34.782712] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:06.806 [2024-07-15 09:46:34.782744] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:06.806 pt2 00:20:06.806 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:20:06.806 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:06.806 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:20:07.075 [2024-07-15 09:46:34.969924] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:07.075 [2024-07-15 09:46:34.970580] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:07.075 [2024-07-15 09:46:34.970643] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x225a7fe34f00 00:20:07.075 [2024-07-15 09:46:34.970648] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:07.075 [2024-07-15 09:46:34.970686] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x225a7fe97e20 00:20:07.075 [2024-07-15 09:46:34.970758] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x225a7fe34f00 00:20:07.075 [2024-07-15 09:46:34.970761] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x225a7fe34f00 00:20:07.075 [2024-07-15 09:46:34.970787] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:07.075 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:07.075 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:07.075 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:07.075 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:07.075 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:07.075 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:07.075 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:07.075 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:07.075 09:46:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:07.075 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:07.075 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.075 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.332 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:07.332 "name": "raid_bdev1", 00:20:07.332 "uuid": "1f99b856-428f-11ef-a0af-c98d8ee52a94", 00:20:07.332 "strip_size_kb": 0, 00:20:07.332 "state": "online", 00:20:07.332 "raid_level": "raid1", 00:20:07.332 "superblock": true, 00:20:07.332 "num_base_bdevs": 2, 00:20:07.332 "num_base_bdevs_discovered": 2, 00:20:07.332 "num_base_bdevs_operational": 2, 00:20:07.332 "base_bdevs_list": [ 00:20:07.332 { 00:20:07.332 "name": "pt1", 00:20:07.332 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:07.332 "is_configured": true, 00:20:07.332 "data_offset": 2048, 00:20:07.332 "data_size": 63488 00:20:07.332 }, 00:20:07.332 { 00:20:07.332 "name": "pt2", 00:20:07.332 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:07.332 "is_configured": true, 00:20:07.332 "data_offset": 2048, 00:20:07.332 "data_size": 63488 00:20:07.332 } 00:20:07.332 ] 00:20:07.332 }' 00:20:07.332 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:07.332 09:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.590 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:20:07.590 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:20:07.590 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:07.590 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:07.590 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:07.590 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:07.590 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:07.590 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:07.590 [2024-07-15 09:46:35.650200] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:07.590 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:07.590 "name": "raid_bdev1", 00:20:07.590 "aliases": [ 00:20:07.590 "1f99b856-428f-11ef-a0af-c98d8ee52a94" 00:20:07.590 ], 00:20:07.590 "product_name": "Raid Volume", 00:20:07.590 "block_size": 512, 00:20:07.590 "num_blocks": 63488, 00:20:07.590 "uuid": "1f99b856-428f-11ef-a0af-c98d8ee52a94", 00:20:07.590 "assigned_rate_limits": { 00:20:07.590 "rw_ios_per_sec": 0, 00:20:07.590 "rw_mbytes_per_sec": 0, 00:20:07.590 "r_mbytes_per_sec": 0, 00:20:07.590 "w_mbytes_per_sec": 0 00:20:07.590 }, 00:20:07.590 "claimed": false, 00:20:07.590 "zoned": false, 00:20:07.590 "supported_io_types": { 00:20:07.590 "read": true, 00:20:07.590 "write": true, 00:20:07.590 "unmap": false, 00:20:07.590 "flush": false, 00:20:07.590 "reset": true, 00:20:07.590 "nvme_admin": false, 00:20:07.590 "nvme_io": 
false, 00:20:07.590 "nvme_io_md": false, 00:20:07.590 "write_zeroes": true, 00:20:07.590 "zcopy": false, 00:20:07.590 "get_zone_info": false, 00:20:07.590 "zone_management": false, 00:20:07.590 "zone_append": false, 00:20:07.590 "compare": false, 00:20:07.590 "compare_and_write": false, 00:20:07.590 "abort": false, 00:20:07.590 "seek_hole": false, 00:20:07.590 "seek_data": false, 00:20:07.590 "copy": false, 00:20:07.590 "nvme_iov_md": false 00:20:07.590 }, 00:20:07.590 "memory_domains": [ 00:20:07.590 { 00:20:07.590 "dma_device_id": "system", 00:20:07.590 "dma_device_type": 1 00:20:07.590 }, 00:20:07.590 { 00:20:07.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:07.590 "dma_device_type": 2 00:20:07.590 }, 00:20:07.590 { 00:20:07.590 "dma_device_id": "system", 00:20:07.590 "dma_device_type": 1 00:20:07.590 }, 00:20:07.590 { 00:20:07.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:07.590 "dma_device_type": 2 00:20:07.590 } 00:20:07.590 ], 00:20:07.590 "driver_specific": { 00:20:07.590 "raid": { 00:20:07.590 "uuid": "1f99b856-428f-11ef-a0af-c98d8ee52a94", 00:20:07.590 "strip_size_kb": 0, 00:20:07.590 "state": "online", 00:20:07.590 "raid_level": "raid1", 00:20:07.590 "superblock": true, 00:20:07.590 "num_base_bdevs": 2, 00:20:07.590 "num_base_bdevs_discovered": 2, 00:20:07.590 "num_base_bdevs_operational": 2, 00:20:07.590 "base_bdevs_list": [ 00:20:07.590 { 00:20:07.590 "name": "pt1", 00:20:07.590 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:07.590 "is_configured": true, 00:20:07.590 "data_offset": 2048, 00:20:07.590 "data_size": 63488 00:20:07.590 }, 00:20:07.590 { 00:20:07.590 "name": "pt2", 00:20:07.590 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:07.590 "is_configured": true, 00:20:07.590 "data_offset": 2048, 00:20:07.590 "data_size": 63488 00:20:07.590 } 00:20:07.590 ] 00:20:07.590 } 00:20:07.590 } 00:20:07.590 }' 00:20:07.590 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:07.590 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:20:07.590 pt2' 00:20:07.590 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:07.590 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:07.590 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:20:07.849 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:07.849 "name": "pt1", 00:20:07.849 "aliases": [ 00:20:07.849 "00000000-0000-0000-0000-000000000001" 00:20:07.849 ], 00:20:07.849 "product_name": "passthru", 00:20:07.849 "block_size": 512, 00:20:07.849 "num_blocks": 65536, 00:20:07.849 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:07.849 "assigned_rate_limits": { 00:20:07.849 "rw_ios_per_sec": 0, 00:20:07.849 "rw_mbytes_per_sec": 0, 00:20:07.849 "r_mbytes_per_sec": 0, 00:20:07.849 "w_mbytes_per_sec": 0 00:20:07.849 }, 00:20:07.849 "claimed": true, 00:20:07.849 "claim_type": "exclusive_write", 00:20:07.849 "zoned": false, 00:20:07.849 "supported_io_types": { 00:20:07.849 "read": true, 00:20:07.849 "write": true, 00:20:07.849 "unmap": true, 00:20:07.849 "flush": true, 00:20:07.849 "reset": true, 00:20:07.849 "nvme_admin": false, 00:20:07.849 "nvme_io": false, 00:20:07.849 "nvme_io_md": false, 00:20:07.849 "write_zeroes": true, 
00:20:07.849 "zcopy": true, 00:20:07.849 "get_zone_info": false, 00:20:07.849 "zone_management": false, 00:20:07.849 "zone_append": false, 00:20:07.849 "compare": false, 00:20:07.849 "compare_and_write": false, 00:20:07.849 "abort": true, 00:20:07.849 "seek_hole": false, 00:20:07.849 "seek_data": false, 00:20:07.849 "copy": true, 00:20:07.849 "nvme_iov_md": false 00:20:07.849 }, 00:20:07.849 "memory_domains": [ 00:20:07.849 { 00:20:07.849 "dma_device_id": "system", 00:20:07.849 "dma_device_type": 1 00:20:07.849 }, 00:20:07.849 { 00:20:07.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:07.849 "dma_device_type": 2 00:20:07.849 } 00:20:07.849 ], 00:20:07.849 "driver_specific": { 00:20:07.849 "passthru": { 00:20:07.849 "name": "pt1", 00:20:07.849 "base_bdev_name": "malloc1" 00:20:07.849 } 00:20:07.849 } 00:20:07.849 }' 00:20:07.849 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:07.849 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:07.849 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:07.849 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:07.849 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:07.849 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:07.849 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:07.849 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:08.108 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:08.108 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:08.108 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:08.108 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:08.108 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:08.108 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:20:08.108 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:08.108 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:08.108 "name": "pt2", 00:20:08.108 "aliases": [ 00:20:08.108 "00000000-0000-0000-0000-000000000002" 00:20:08.108 ], 00:20:08.108 "product_name": "passthru", 00:20:08.108 "block_size": 512, 00:20:08.108 "num_blocks": 65536, 00:20:08.108 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:08.108 "assigned_rate_limits": { 00:20:08.108 "rw_ios_per_sec": 0, 00:20:08.108 "rw_mbytes_per_sec": 0, 00:20:08.108 "r_mbytes_per_sec": 0, 00:20:08.108 "w_mbytes_per_sec": 0 00:20:08.108 }, 00:20:08.108 "claimed": true, 00:20:08.108 "claim_type": "exclusive_write", 00:20:08.108 "zoned": false, 00:20:08.108 "supported_io_types": { 00:20:08.108 "read": true, 00:20:08.108 "write": true, 00:20:08.108 "unmap": true, 00:20:08.108 "flush": true, 00:20:08.108 "reset": true, 00:20:08.108 "nvme_admin": false, 00:20:08.108 "nvme_io": false, 00:20:08.108 "nvme_io_md": false, 00:20:08.108 "write_zeroes": true, 00:20:08.108 "zcopy": true, 00:20:08.108 "get_zone_info": false, 00:20:08.108 "zone_management": false, 00:20:08.108 "zone_append": false, 00:20:08.108 
"compare": false, 00:20:08.108 "compare_and_write": false, 00:20:08.108 "abort": true, 00:20:08.108 "seek_hole": false, 00:20:08.108 "seek_data": false, 00:20:08.108 "copy": true, 00:20:08.108 "nvme_iov_md": false 00:20:08.108 }, 00:20:08.108 "memory_domains": [ 00:20:08.108 { 00:20:08.108 "dma_device_id": "system", 00:20:08.108 "dma_device_type": 1 00:20:08.108 }, 00:20:08.108 { 00:20:08.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:08.108 "dma_device_type": 2 00:20:08.108 } 00:20:08.108 ], 00:20:08.108 "driver_specific": { 00:20:08.108 "passthru": { 00:20:08.108 "name": "pt2", 00:20:08.108 "base_bdev_name": "malloc2" 00:20:08.108 } 00:20:08.108 } 00:20:08.108 }' 00:20:08.108 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:08.366 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:08.366 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:08.366 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:08.366 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:08.366 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:08.366 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:08.366 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:08.366 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:08.366 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:08.366 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:08.366 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:08.366 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:08.366 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:20:08.622 [2024-07-15 09:46:36.474470] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:08.622 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=1f99b856-428f-11ef-a0af-c98d8ee52a94 00:20:08.622 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 1f99b856-428f-11ef-a0af-c98d8ee52a94 ']' 00:20:08.622 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:08.622 [2024-07-15 09:46:36.678459] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:08.622 [2024-07-15 09:46:36.678487] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:08.622 [2024-07-15 09:46:36.678508] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:08.622 [2024-07-15 09:46:36.678523] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:08.622 [2024-07-15 09:46:36.678527] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x225a7fe34f00 name raid_bdev1, state offline 00:20:08.622 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:20:08.622 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:20:08.879 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:20:08.879 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:20:08.879 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:20:08.879 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:09.136 09:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:20:09.136 09:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:09.395 09:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:20:09.395 09:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:09.653 09:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:20:09.653 09:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:20:09.653 09:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:20:09.653 09:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:20:09.653 09:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:09.653 09:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:09.653 09:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:09.653 09:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:09.653 09:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:09.653 09:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:09.654 09:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:09.654 09:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:09.654 09:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:20:09.654 [2024-07-15 09:46:37.698818] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:09.654 [2024-07-15 09:46:37.699532] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:09.654 [2024-07-15 09:46:37.699559] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid 
bdev found on bdev malloc1 00:20:09.654 [2024-07-15 09:46:37.699611] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:09.654 [2024-07-15 09:46:37.699620] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:09.654 [2024-07-15 09:46:37.699625] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x225a7fe34c80 name raid_bdev1, state configuring 00:20:09.654 request: 00:20:09.654 { 00:20:09.654 "name": "raid_bdev1", 00:20:09.654 "raid_level": "raid1", 00:20:09.654 "base_bdevs": [ 00:20:09.654 "malloc1", 00:20:09.654 "malloc2" 00:20:09.654 ], 00:20:09.654 "superblock": false, 00:20:09.654 "method": "bdev_raid_create", 00:20:09.654 "req_id": 1 00:20:09.654 } 00:20:09.654 Got JSON-RPC error response 00:20:09.654 response: 00:20:09.654 { 00:20:09.654 "code": -17, 00:20:09.654 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:09.654 } 00:20:09.654 09:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:20:09.654 09:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:09.654 09:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:09.654 09:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:09.654 09:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:09.654 09:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:20:09.911 09:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:20:09.912 09:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:20:09.912 09:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:10.170 [2024-07-15 09:46:38.098921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:10.170 [2024-07-15 09:46:38.098986] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:10.170 [2024-07-15 09:46:38.098997] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x225a7fe34780 00:20:10.170 [2024-07-15 09:46:38.099004] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:10.170 [2024-07-15 09:46:38.099759] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:10.170 [2024-07-15 09:46:38.099793] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:10.170 [2024-07-15 09:46:38.099817] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:10.170 [2024-07-15 09:46:38.099828] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:10.170 pt1 00:20:10.170 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:10.170 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:10.170 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:10.170 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
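The expected-failure check above can be read as this sketch (the NOT helper in autotest_common.sh simply inverts the exit status; the bdev names and the -17/"File exists" error are from this run):

# malloc1/malloc2 still carry raid_bdev1's superblock, so assembling a new
# array directly on the malloc bdevs must be rejected (JSON-RPC error -17)
if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
       bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1; then
    echo "bdev_raid_create was expected to fail" >&2
    exit 1
fi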
00:20:10.170 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:10.170 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:10.170 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:10.170 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:10.170 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:10.170 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:10.170 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.170 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.428 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:10.428 "name": "raid_bdev1", 00:20:10.428 "uuid": "1f99b856-428f-11ef-a0af-c98d8ee52a94", 00:20:10.428 "strip_size_kb": 0, 00:20:10.428 "state": "configuring", 00:20:10.428 "raid_level": "raid1", 00:20:10.428 "superblock": true, 00:20:10.428 "num_base_bdevs": 2, 00:20:10.428 "num_base_bdevs_discovered": 1, 00:20:10.428 "num_base_bdevs_operational": 2, 00:20:10.428 "base_bdevs_list": [ 00:20:10.428 { 00:20:10.428 "name": "pt1", 00:20:10.428 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:10.428 "is_configured": true, 00:20:10.428 "data_offset": 2048, 00:20:10.428 "data_size": 63488 00:20:10.428 }, 00:20:10.428 { 00:20:10.428 "name": null, 00:20:10.428 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:10.428 "is_configured": false, 00:20:10.428 "data_offset": 2048, 00:20:10.428 "data_size": 63488 00:20:10.428 } 00:20:10.428 ] 00:20:10.428 }' 00:20:10.428 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:10.428 09:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.686 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:20:10.686 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:20:10.686 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:10.686 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:10.944 [2024-07-15 09:46:38.815153] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:10.944 [2024-07-15 09:46:38.815229] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:10.944 [2024-07-15 09:46:38.815240] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x225a7fe34f00 00:20:10.944 [2024-07-15 09:46:38.815248] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:10.944 [2024-07-15 09:46:38.815381] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:10.944 [2024-07-15 09:46:38.815394] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:10.944 [2024-07-15 09:46:38.815417] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:10.944 [2024-07-15 09:46:38.815425] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:10.944 [2024-07-15 09:46:38.815454] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x225a7fe35180 00:20:10.944 [2024-07-15 09:46:38.815457] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:10.944 [2024-07-15 09:46:38.815474] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x225a7fe97e20 00:20:10.944 [2024-07-15 09:46:38.815520] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x225a7fe35180 00:20:10.944 [2024-07-15 09:46:38.815523] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x225a7fe35180 00:20:10.944 [2024-07-15 09:46:38.815540] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:10.944 pt2 00:20:10.944 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:20:10.944 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:10.944 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:10.944 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:10.944 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:10.944 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:10.944 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:10.944 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:10.944 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:10.944 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:10.944 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:10.944 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:10.944 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.944 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.944 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:10.944 "name": "raid_bdev1", 00:20:10.944 "uuid": "1f99b856-428f-11ef-a0af-c98d8ee52a94", 00:20:10.944 "strip_size_kb": 0, 00:20:10.944 "state": "online", 00:20:10.944 "raid_level": "raid1", 00:20:10.944 "superblock": true, 00:20:10.944 "num_base_bdevs": 2, 00:20:10.944 "num_base_bdevs_discovered": 2, 00:20:10.944 "num_base_bdevs_operational": 2, 00:20:10.944 "base_bdevs_list": [ 00:20:10.944 { 00:20:10.944 "name": "pt1", 00:20:10.944 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:10.944 "is_configured": true, 00:20:10.944 "data_offset": 2048, 00:20:10.944 "data_size": 63488 00:20:10.944 }, 00:20:10.944 { 00:20:10.944 "name": "pt2", 00:20:10.944 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:10.944 "is_configured": true, 00:20:10.944 "data_offset": 2048, 00:20:10.944 "data_size": 63488 00:20:10.944 } 00:20:10.944 ] 00:20:10.944 }' 00:20:10.944 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:10.944 
09:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.511 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:20:11.511 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:20:11.511 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:11.511 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:11.511 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:11.511 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:11.511 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:11.511 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:11.511 [2024-07-15 09:46:39.607429] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:11.786 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:11.786 "name": "raid_bdev1", 00:20:11.786 "aliases": [ 00:20:11.786 "1f99b856-428f-11ef-a0af-c98d8ee52a94" 00:20:11.786 ], 00:20:11.786 "product_name": "Raid Volume", 00:20:11.786 "block_size": 512, 00:20:11.786 "num_blocks": 63488, 00:20:11.786 "uuid": "1f99b856-428f-11ef-a0af-c98d8ee52a94", 00:20:11.786 "assigned_rate_limits": { 00:20:11.786 "rw_ios_per_sec": 0, 00:20:11.786 "rw_mbytes_per_sec": 0, 00:20:11.786 "r_mbytes_per_sec": 0, 00:20:11.786 "w_mbytes_per_sec": 0 00:20:11.786 }, 00:20:11.786 "claimed": false, 00:20:11.786 "zoned": false, 00:20:11.786 "supported_io_types": { 00:20:11.786 "read": true, 00:20:11.786 "write": true, 00:20:11.786 "unmap": false, 00:20:11.786 "flush": false, 00:20:11.786 "reset": true, 00:20:11.786 "nvme_admin": false, 00:20:11.786 "nvme_io": false, 00:20:11.786 "nvme_io_md": false, 00:20:11.786 "write_zeroes": true, 00:20:11.786 "zcopy": false, 00:20:11.786 "get_zone_info": false, 00:20:11.786 "zone_management": false, 00:20:11.786 "zone_append": false, 00:20:11.786 "compare": false, 00:20:11.786 "compare_and_write": false, 00:20:11.786 "abort": false, 00:20:11.786 "seek_hole": false, 00:20:11.786 "seek_data": false, 00:20:11.786 "copy": false, 00:20:11.786 "nvme_iov_md": false 00:20:11.786 }, 00:20:11.786 "memory_domains": [ 00:20:11.786 { 00:20:11.786 "dma_device_id": "system", 00:20:11.786 "dma_device_type": 1 00:20:11.786 }, 00:20:11.786 { 00:20:11.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.786 "dma_device_type": 2 00:20:11.786 }, 00:20:11.786 { 00:20:11.786 "dma_device_id": "system", 00:20:11.786 "dma_device_type": 1 00:20:11.786 }, 00:20:11.786 { 00:20:11.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.786 "dma_device_type": 2 00:20:11.786 } 00:20:11.786 ], 00:20:11.786 "driver_specific": { 00:20:11.786 "raid": { 00:20:11.786 "uuid": "1f99b856-428f-11ef-a0af-c98d8ee52a94", 00:20:11.786 "strip_size_kb": 0, 00:20:11.786 "state": "online", 00:20:11.786 "raid_level": "raid1", 00:20:11.786 "superblock": true, 00:20:11.786 "num_base_bdevs": 2, 00:20:11.786 "num_base_bdevs_discovered": 2, 00:20:11.786 "num_base_bdevs_operational": 2, 00:20:11.786 "base_bdevs_list": [ 00:20:11.786 { 00:20:11.786 "name": "pt1", 00:20:11.786 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:11.786 "is_configured": true, 00:20:11.786 
"data_offset": 2048, 00:20:11.786 "data_size": 63488 00:20:11.786 }, 00:20:11.786 { 00:20:11.786 "name": "pt2", 00:20:11.786 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:11.786 "is_configured": true, 00:20:11.786 "data_offset": 2048, 00:20:11.786 "data_size": 63488 00:20:11.786 } 00:20:11.786 ] 00:20:11.786 } 00:20:11.786 } 00:20:11.786 }' 00:20:11.786 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:11.786 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:20:11.786 pt2' 00:20:11.786 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:11.786 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:20:11.786 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:11.786 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:11.786 "name": "pt1", 00:20:11.786 "aliases": [ 00:20:11.786 "00000000-0000-0000-0000-000000000001" 00:20:11.786 ], 00:20:11.786 "product_name": "passthru", 00:20:11.786 "block_size": 512, 00:20:11.786 "num_blocks": 65536, 00:20:11.786 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:11.786 "assigned_rate_limits": { 00:20:11.786 "rw_ios_per_sec": 0, 00:20:11.786 "rw_mbytes_per_sec": 0, 00:20:11.786 "r_mbytes_per_sec": 0, 00:20:11.786 "w_mbytes_per_sec": 0 00:20:11.786 }, 00:20:11.786 "claimed": true, 00:20:11.786 "claim_type": "exclusive_write", 00:20:11.786 "zoned": false, 00:20:11.786 "supported_io_types": { 00:20:11.786 "read": true, 00:20:11.786 "write": true, 00:20:11.786 "unmap": true, 00:20:11.786 "flush": true, 00:20:11.786 "reset": true, 00:20:11.786 "nvme_admin": false, 00:20:11.787 "nvme_io": false, 00:20:11.787 "nvme_io_md": false, 00:20:11.787 "write_zeroes": true, 00:20:11.787 "zcopy": true, 00:20:11.787 "get_zone_info": false, 00:20:11.787 "zone_management": false, 00:20:11.787 "zone_append": false, 00:20:11.787 "compare": false, 00:20:11.787 "compare_and_write": false, 00:20:11.787 "abort": true, 00:20:11.787 "seek_hole": false, 00:20:11.787 "seek_data": false, 00:20:11.787 "copy": true, 00:20:11.787 "nvme_iov_md": false 00:20:11.787 }, 00:20:11.787 "memory_domains": [ 00:20:11.787 { 00:20:11.787 "dma_device_id": "system", 00:20:11.787 "dma_device_type": 1 00:20:11.787 }, 00:20:11.787 { 00:20:11.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.787 "dma_device_type": 2 00:20:11.787 } 00:20:11.787 ], 00:20:11.787 "driver_specific": { 00:20:11.787 "passthru": { 00:20:11.787 "name": "pt1", 00:20:11.787 "base_bdev_name": "malloc1" 00:20:11.787 } 00:20:11.787 } 00:20:11.787 }' 00:20:11.787 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:11.787 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:11.787 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:11.787 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:11.787 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:11.787 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:11.787 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:20:12.045 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:12.045 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:12.045 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:12.045 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:12.045 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:12.045 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:12.045 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:20:12.045 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:12.045 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:12.045 "name": "pt2", 00:20:12.045 "aliases": [ 00:20:12.045 "00000000-0000-0000-0000-000000000002" 00:20:12.045 ], 00:20:12.045 "product_name": "passthru", 00:20:12.045 "block_size": 512, 00:20:12.045 "num_blocks": 65536, 00:20:12.045 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:12.045 "assigned_rate_limits": { 00:20:12.045 "rw_ios_per_sec": 0, 00:20:12.045 "rw_mbytes_per_sec": 0, 00:20:12.045 "r_mbytes_per_sec": 0, 00:20:12.045 "w_mbytes_per_sec": 0 00:20:12.045 }, 00:20:12.045 "claimed": true, 00:20:12.045 "claim_type": "exclusive_write", 00:20:12.045 "zoned": false, 00:20:12.045 "supported_io_types": { 00:20:12.045 "read": true, 00:20:12.045 "write": true, 00:20:12.045 "unmap": true, 00:20:12.045 "flush": true, 00:20:12.045 "reset": true, 00:20:12.045 "nvme_admin": false, 00:20:12.045 "nvme_io": false, 00:20:12.045 "nvme_io_md": false, 00:20:12.045 "write_zeroes": true, 00:20:12.045 "zcopy": true, 00:20:12.045 "get_zone_info": false, 00:20:12.045 "zone_management": false, 00:20:12.045 "zone_append": false, 00:20:12.045 "compare": false, 00:20:12.045 "compare_and_write": false, 00:20:12.045 "abort": true, 00:20:12.045 "seek_hole": false, 00:20:12.045 "seek_data": false, 00:20:12.045 "copy": true, 00:20:12.045 "nvme_iov_md": false 00:20:12.045 }, 00:20:12.045 "memory_domains": [ 00:20:12.045 { 00:20:12.045 "dma_device_id": "system", 00:20:12.045 "dma_device_type": 1 00:20:12.045 }, 00:20:12.045 { 00:20:12.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:12.045 "dma_device_type": 2 00:20:12.045 } 00:20:12.045 ], 00:20:12.045 "driver_specific": { 00:20:12.045 "passthru": { 00:20:12.045 "name": "pt2", 00:20:12.045 "base_bdev_name": "malloc2" 00:20:12.045 } 00:20:12.045 } 00:20:12.045 }' 00:20:12.045 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:12.045 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:12.045 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:12.045 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:12.045 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:12.304 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:12.304 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:12.304 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:12.304 09:46:40 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:12.304 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:12.304 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:12.304 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:12.304 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:12.304 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:20:12.304 [2024-07-15 09:46:40.379594] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:12.304 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 1f99b856-428f-11ef-a0af-c98d8ee52a94 '!=' 1f99b856-428f-11ef-a0af-c98d8ee52a94 ']' 00:20:12.304 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:20:12.304 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:12.304 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:20:12.304 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:12.563 [2024-07-15 09:46:40.583640] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:12.563 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:12.563 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:12.563 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:12.563 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:12.563 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:12.563 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:20:12.563 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:12.563 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:12.563 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:12.563 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:12.563 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.563 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.821 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:12.821 "name": "raid_bdev1", 00:20:12.821 "uuid": "1f99b856-428f-11ef-a0af-c98d8ee52a94", 00:20:12.821 "strip_size_kb": 0, 00:20:12.821 "state": "online", 00:20:12.821 "raid_level": "raid1", 00:20:12.821 "superblock": true, 00:20:12.821 "num_base_bdevs": 2, 00:20:12.821 "num_base_bdevs_discovered": 1, 00:20:12.821 "num_base_bdevs_operational": 1, 00:20:12.821 "base_bdevs_list": [ 00:20:12.821 { 00:20:12.821 "name": null, 00:20:12.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.821 "is_configured": false, 00:20:12.821 "data_offset": 
2048, 00:20:12.821 "data_size": 63488 00:20:12.821 }, 00:20:12.821 { 00:20:12.821 "name": "pt2", 00:20:12.821 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:12.821 "is_configured": true, 00:20:12.821 "data_offset": 2048, 00:20:12.821 "data_size": 63488 00:20:12.821 } 00:20:12.821 ] 00:20:12.821 }' 00:20:12.821 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:12.821 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.081 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:13.340 [2024-07-15 09:46:41.263829] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:13.340 [2024-07-15 09:46:41.263856] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:13.340 [2024-07-15 09:46:41.263868] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:13.340 [2024-07-15 09:46:41.263877] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:13.340 [2024-07-15 09:46:41.263881] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x225a7fe35180 name raid_bdev1, state offline 00:20:13.340 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.340 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:20:13.599 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:20:13.599 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:20:13.599 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:20:13.599 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:20:13.599 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:13.599 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:20:13.599 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:20:13.599 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:20:13.599 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:20:13.599 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=1 00:20:13.599 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:13.858 [2024-07-15 09:46:41.855995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:13.858 [2024-07-15 09:46:41.856053] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:13.858 [2024-07-15 09:46:41.856062] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x225a7fe34f00 00:20:13.858 [2024-07-15 09:46:41.856068] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:13.858 [2024-07-15 09:46:41.856823] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:13.858 
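Because raid1 carries redundancy, the array is expected to stay online each time it is reduced to one base bdev; the state assertion applied at bdev_raid.sh@495 above, and again after the re-assembly below, is roughly the following (a sketch; the real verify_raid_bdev_state helper also checks raid_level and strip_size):

# Fetch raid_bdev1's entry and assert the degraded-but-online expectation
tmp=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
          bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
[[ $(jq -r .state <<< "$tmp") == online ]]
[[ $(jq -r .num_base_bdevs_discovered  <<< "$tmp") == 1 ]]
[[ $(jq -r .num_base_bdevs_operational <<< "$tmp") == 1 ]]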
[2024-07-15 09:46:41.856855] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:13.858 [2024-07-15 09:46:41.856876] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:13.858 [2024-07-15 09:46:41.856887] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:13.858 [2024-07-15 09:46:41.856906] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x225a7fe35180 00:20:13.858 [2024-07-15 09:46:41.856910] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:13.858 [2024-07-15 09:46:41.856928] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x225a7fe97e20 00:20:13.859 [2024-07-15 09:46:41.856965] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x225a7fe35180 00:20:13.859 [2024-07-15 09:46:41.856969] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x225a7fe35180 00:20:13.859 [2024-07-15 09:46:41.856984] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:13.859 pt2 00:20:13.859 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:13.859 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:13.859 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:13.859 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:13.859 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:13.859 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:20:13.859 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:13.859 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:13.859 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:13.859 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:13.859 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.859 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.117 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:14.117 "name": "raid_bdev1", 00:20:14.117 "uuid": "1f99b856-428f-11ef-a0af-c98d8ee52a94", 00:20:14.117 "strip_size_kb": 0, 00:20:14.117 "state": "online", 00:20:14.117 "raid_level": "raid1", 00:20:14.117 "superblock": true, 00:20:14.117 "num_base_bdevs": 2, 00:20:14.117 "num_base_bdevs_discovered": 1, 00:20:14.117 "num_base_bdevs_operational": 1, 00:20:14.117 "base_bdevs_list": [ 00:20:14.117 { 00:20:14.117 "name": null, 00:20:14.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.117 "is_configured": false, 00:20:14.117 "data_offset": 2048, 00:20:14.117 "data_size": 63488 00:20:14.117 }, 00:20:14.117 { 00:20:14.117 "name": "pt2", 00:20:14.118 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:14.118 "is_configured": true, 00:20:14.118 "data_offset": 2048, 00:20:14.118 "data_size": 63488 00:20:14.118 } 00:20:14.118 ] 00:20:14.118 }' 00:20:14.118 09:46:42 
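
The verify_raid_bdev_state helper that produces these JSON dumps reduces to one RPC plus jq assertions on individual fields; a condensed sketch of the checks made at this point (state online, level raid1, one of two base bdevs left):

    info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r .state <<< "$info") == online ]]
    [[ $(jq -r .raid_level <<< "$info") == raid1 ]]
    [[ $(jq -r .num_base_bdevs_discovered <<< "$info") -eq 1 ]]   # degraded
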
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:14.118 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.377 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:14.636 [2024-07-15 09:46:42.580195] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:14.636 [2024-07-15 09:46:42.580219] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:14.636 [2024-07-15 09:46:42.580233] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:14.636 [2024-07-15 09:46:42.580242] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:14.636 [2024-07-15 09:46:42.580246] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x225a7fe35180 name raid_bdev1, state offline 00:20:14.636 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:14.636 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:20:14.895 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:20:14.895 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:20:14.895 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:20:14.895 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:14.895 [2024-07-15 09:46:42.980312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:14.895 [2024-07-15 09:46:42.980365] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:14.895 [2024-07-15 09:46:42.980375] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x225a7fe34c80 00:20:14.895 [2024-07-15 09:46:42.980381] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:14.895 [2024-07-15 09:46:42.981113] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:14.895 [2024-07-15 09:46:42.981139] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:14.895 [2024-07-15 09:46:42.981159] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:14.895 [2024-07-15 09:46:42.981168] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:14.895 [2024-07-15 09:46:42.981192] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:14.895 [2024-07-15 09:46:42.981195] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:14.895 [2024-07-15 09:46:42.981199] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x225a7fe34780 name raid_bdev1, state configuring 00:20:14.895 [2024-07-15 09:46:42.981206] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:14.895 [2024-07-15 09:46:42.981216] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x225a7fe34780 00:20:14.895 [2024-07-15 09:46:42.981219] 
bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:14.895 [2024-07-15 09:46:42.981237] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x225a7fe97e20 00:20:14.895 [2024-07-15 09:46:42.981269] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x225a7fe34780 00:20:14.895 [2024-07-15 09:46:42.981272] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x225a7fe34780 00:20:14.895 [2024-07-15 09:46:42.981287] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:14.895 pt1 00:20:15.154 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:20:15.154 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:15.154 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:15.154 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:15.154 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:15.154 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:15.154 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:20:15.154 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:15.154 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:15.154 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:15.154 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:15.154 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.154 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.154 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:15.154 "name": "raid_bdev1", 00:20:15.154 "uuid": "1f99b856-428f-11ef-a0af-c98d8ee52a94", 00:20:15.154 "strip_size_kb": 0, 00:20:15.154 "state": "online", 00:20:15.154 "raid_level": "raid1", 00:20:15.154 "superblock": true, 00:20:15.154 "num_base_bdevs": 2, 00:20:15.154 "num_base_bdevs_discovered": 1, 00:20:15.154 "num_base_bdevs_operational": 1, 00:20:15.154 "base_bdevs_list": [ 00:20:15.154 { 00:20:15.154 "name": null, 00:20:15.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.154 "is_configured": false, 00:20:15.154 "data_offset": 2048, 00:20:15.154 "data_size": 63488 00:20:15.154 }, 00:20:15.154 { 00:20:15.154 "name": "pt2", 00:20:15.154 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:15.154 "is_configured": true, 00:20:15.154 "data_offset": 2048, 00:20:15.154 "data_size": 63488 00:20:15.154 } 00:20:15.154 ] 00:20:15.154 }' 00:20:15.154 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:15.154 09:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.413 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:20:15.413 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq 
-r '.[].base_bdevs_list[0].is_configured' 00:20:15.672 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:20:15.672 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:15.672 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:20:15.931 [2024-07-15 09:46:43.888389] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:15.931 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 1f99b856-428f-11ef-a0af-c98d8ee52a94 '!=' 1f99b856-428f-11ef-a0af-c98d8ee52a94 ']' 00:20:15.931 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 51301 00:20:15.931 09:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 51301 ']' 00:20:15.931 09:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 51301 00:20:15.931 09:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:20:15.931 09:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:20:15.931 09:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 51301 00:20:15.931 09:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:20:15.931 09:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:20:15.931 killing process with pid 51301 00:20:15.931 09:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:20:15.931 09:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 51301' 00:20:15.931 09:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 51301 00:20:15.931 [2024-07-15 09:46:43.920334] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:15.931 [2024-07-15 09:46:43.920351] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:15.931 [2024-07-15 09:46:43.920360] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:15.931 [2024-07-15 09:46:43.920363] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x225a7fe34780 name raid_bdev1, state offline 00:20:15.931 09:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 51301 00:20:15.931 [2024-07-15 09:46:43.937784] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:16.190 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:20:16.190 00:20:16.190 real 0m11.135s 00:20:16.191 user 0m19.058s 00:20:16.191 sys 0m2.471s 00:20:16.191 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:16.191 ************************************ 00:20:16.191 END TEST raid_superblock_test 00:20:16.191 ************************************ 00:20:16.191 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.191 09:46:44 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:16.191 09:46:44 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:20:16.191 09:46:44 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:20:16.191 09:46:44 
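
raid_superblock_test closes by checking that slot 0 of the reassembled array is unconfigured (is_configured false, since pt1 never came back) and that the array kept its original UUID across teardown and superblock-driven reassembly. The UUID comparison as a standalone sketch, where uuid_before is a hypothetical variable standing in for the value captured at creation time (1f99b856-428f-11ef-a0af-c98d8ee52a94 in this run):

    uuid_now=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')
    [ "$uuid_now" = "$uuid_before" ]
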
bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:16.191 09:46:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:16.191 ************************************ 00:20:16.191 START TEST raid_read_error_test 00:20:16.191 ************************************ 00:20:16.191 09:46:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 2 read 00:20:16.191 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:20:16.191 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:20:16.191 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:20:16.191 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:20:16.191 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:16.191 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:20:16.191 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:16.191 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:16.191 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:20:16.191 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:16.191 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:16.191 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:16.191 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:20:16.191 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:20:16.191 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:20:16.191 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:20:16.191 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:20:16.191 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:20:16.191 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:20:16.191 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:20:16.191 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:20:16.191 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.5tOJFz24Yp 00:20:16.191 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=51682 00:20:16.191 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 51682 /var/tmp/spdk-raid.sock 00:20:16.191 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:16.191 09:46:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 51682 ']' 00:20:16.191 09:46:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:16.191 09:46:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:16.191 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:16.191 09:46:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:16.191 09:46:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:16.191 09:46:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.191 [2024-07-15 09:46:44.265517] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:20:16.191 [2024-07-15 09:46:44.265760] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:20:17.153 EAL: TSC is not safe to use in SMP mode 00:20:17.153 EAL: TSC is not invariant 00:20:17.153 [2024-07-15 09:46:44.972941] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.153 [2024-07-15 09:46:45.080425] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:20:17.153 [2024-07-15 09:46:45.082953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.153 [2024-07-15 09:46:45.083703] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:17.153 [2024-07-15 09:46:45.083716] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:17.153 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:17.153 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:20:17.153 09:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:17.153 09:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:17.412 BaseBdev1_malloc 00:20:17.412 09:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:20:17.672 true 00:20:17.672 09:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:17.931 [2024-07-15 09:46:45.847149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:17.931 [2024-07-15 09:46:45.847214] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:17.931 [2024-07-15 09:46:45.847246] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x27880ec34780 00:20:17.931 [2024-07-15 09:46:45.847253] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:17.931 [2024-07-15 09:46:45.847919] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:17.931 [2024-07-15 09:46:45.847946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:17.931 BaseBdev1 00:20:17.931 09:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:17.931 09:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:18.191 BaseBdev2_malloc 00:20:18.191 09:46:46 
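
Each base bdev fed to the I/O-error tests is a three-layer stack: a malloc bdev for backing storage, an error-injection bdev on top (which takes the EE_ prefix), and a passthru bdev that gives the stack its final name. The three RPCs for the BaseBdev1 leg, as traced above:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_malloc_create 32 512 -b BaseBdev1_malloc      # 32 MB, 512-byte blocks
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_error_create BaseBdev1_malloc                 # exposes EE_BaseBdev1_malloc
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
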
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:20:18.191 true 00:20:18.191 09:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:18.450 [2024-07-15 09:46:46.431289] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:18.450 [2024-07-15 09:46:46.431342] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:18.450 [2024-07-15 09:46:46.431369] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x27880ec34c80 00:20:18.450 [2024-07-15 09:46:46.431376] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:18.450 [2024-07-15 09:46:46.432021] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:18.450 [2024-07-15 09:46:46.432053] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:18.450 BaseBdev2 00:20:18.450 09:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:20:18.708 [2024-07-15 09:46:46.647356] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:18.708 [2024-07-15 09:46:46.647951] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:18.708 [2024-07-15 09:46:46.648008] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x27880ec34f00 00:20:18.708 [2024-07-15 09:46:46.648013] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:18.708 [2024-07-15 09:46:46.648045] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x27880eca0e20 00:20:18.708 [2024-07-15 09:46:46.648111] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x27880ec34f00 00:20:18.709 [2024-07-15 09:46:46.648114] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x27880ec34f00 00:20:18.709 [2024-07-15 09:46:46.648133] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:18.709 09:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:18.709 09:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:18.709 09:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:18.709 09:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:18.709 09:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:18.709 09:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:18.709 09:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:18.709 09:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:18.709 09:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:18.709 09:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:18.709 09:46:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.709 09:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.968 09:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:18.968 "name": "raid_bdev1", 00:20:18.968 "uuid": "268f8dea-428f-11ef-a0af-c98d8ee52a94", 00:20:18.968 "strip_size_kb": 0, 00:20:18.968 "state": "online", 00:20:18.968 "raid_level": "raid1", 00:20:18.968 "superblock": true, 00:20:18.968 "num_base_bdevs": 2, 00:20:18.968 "num_base_bdevs_discovered": 2, 00:20:18.968 "num_base_bdevs_operational": 2, 00:20:18.968 "base_bdevs_list": [ 00:20:18.968 { 00:20:18.968 "name": "BaseBdev1", 00:20:18.968 "uuid": "8100607d-0d87-c65b-8bfc-9bbbe50586ca", 00:20:18.968 "is_configured": true, 00:20:18.968 "data_offset": 2048, 00:20:18.968 "data_size": 63488 00:20:18.968 }, 00:20:18.968 { 00:20:18.968 "name": "BaseBdev2", 00:20:18.968 "uuid": "fd1e627d-00dc-5b5c-acce-4dd5f5d2cc91", 00:20:18.968 "is_configured": true, 00:20:18.968 "data_offset": 2048, 00:20:18.968 "data_size": 63488 00:20:18.968 } 00:20:18.968 ] 00:20:18.968 }' 00:20:18.968 09:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:18.968 09:46:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.228 09:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:20:19.228 09:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:19.228 [2024-07-15 09:46:47.263573] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x27880eca0ec0 00:20:20.165 09:46:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:20:20.424 09:46:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:20:20.424 09:46:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:20:20.424 09:46:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:20:20.424 09:46:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:20:20.424 09:46:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:20.424 09:46:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:20.424 09:46:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:20.424 09:46:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:20.424 09:46:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:20.424 09:46:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:20.424 09:46:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:20.424 09:46:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:20.424 09:46:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:20.424 09:46:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:20.424 09:46:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.424 09:46:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.683 09:46:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:20.683 "name": "raid_bdev1", 00:20:20.683 "uuid": "268f8dea-428f-11ef-a0af-c98d8ee52a94", 00:20:20.683 "strip_size_kb": 0, 00:20:20.683 "state": "online", 00:20:20.683 "raid_level": "raid1", 00:20:20.683 "superblock": true, 00:20:20.683 "num_base_bdevs": 2, 00:20:20.683 "num_base_bdevs_discovered": 2, 00:20:20.683 "num_base_bdevs_operational": 2, 00:20:20.683 "base_bdevs_list": [ 00:20:20.683 { 00:20:20.683 "name": "BaseBdev1", 00:20:20.683 "uuid": "8100607d-0d87-c65b-8bfc-9bbbe50586ca", 00:20:20.683 "is_configured": true, 00:20:20.683 "data_offset": 2048, 00:20:20.683 "data_size": 63488 00:20:20.683 }, 00:20:20.683 { 00:20:20.683 "name": "BaseBdev2", 00:20:20.683 "uuid": "fd1e627d-00dc-5b5c-acce-4dd5f5d2cc91", 00:20:20.683 "is_configured": true, 00:20:20.683 "data_offset": 2048, 00:20:20.683 "data_size": 63488 00:20:20.683 } 00:20:20.683 ] 00:20:20.683 }' 00:20:20.683 09:46:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:20.683 09:46:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.942 09:46:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:21.201 [2024-07-15 09:46:49.154612] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:21.201 [2024-07-15 09:46:49.154647] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:21.201 [2024-07-15 09:46:49.154982] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:21.201 [2024-07-15 09:46:49.154990] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:21.201 [2024-07-15 09:46:49.155006] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:21.201 [2024-07-15 09:46:49.155010] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x27880ec34f00 name raid_bdev1, state offline 00:20:21.201 0 00:20:21.201 09:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 51682 00:20:21.201 09:46:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 51682 ']' 00:20:21.201 09:46:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 51682 00:20:21.201 09:46:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:20:21.201 09:46:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:20:21.201 09:46:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 51682 00:20:21.201 09:46:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:20:21.201 09:46:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:20:21.201 09:46:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:20:21.201 killing process with pid 51682 00:20:21.201 
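
The read pass just completed confirms the redundancy property under test: a read error injected into EE_BaseBdev1_malloc is absorbed by the raid1 mirror, so the JSON above still reports both base bdevs discovered and configured. The injection plus the check, condensed:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_error_inject_error EE_BaseBdev1_malloc read failure
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | \
        jq -r '.[] | select(.name == "raid_bdev1").num_base_bdevs_discovered'   # expect 2
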
09:46:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 51682' 00:20:21.201 09:46:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 51682 00:20:21.201 [2024-07-15 09:46:49.188512] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:21.201 09:46:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 51682 00:20:21.201 [2024-07-15 09:46:49.206468] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:21.460 09:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.5tOJFz24Yp 00:20:21.460 09:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:20:21.460 09:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:20:21.460 09:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:20:21.460 09:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:20:21.460 09:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:21.460 09:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:20:21.460 09:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:20:21.460 00:20:21.460 real 0m5.226s 00:20:21.460 user 0m7.437s 00:20:21.460 sys 0m1.205s 00:20:21.460 09:46:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:21.460 09:46:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.460 ************************************ 00:20:21.460 END TEST raid_read_error_test 00:20:21.460 ************************************ 00:20:21.460 09:46:49 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:21.460 09:46:49 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:20:21.460 09:46:49 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:20:21.460 09:46:49 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:21.460 09:46:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:21.460 ************************************ 00:20:21.460 START TEST raid_write_error_test 00:20:21.460 ************************************ 00:20:21.460 09:46:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 2 write 00:20:21.460 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:20:21.460 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:20:21.460 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:20:21.460 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:20:21.460 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:21.460 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:20:21.460 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:21.460 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:21.461 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:20:21.461 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:21.461 
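
Pass/fail for the I/O-error tests comes from the bdevperf log itself: the raid_bdev1 summary row is isolated with grep, and its sixth column, which the script treats as failures per second, must read 0.00. The parsing pipeline as a standalone snippet:

    fail_per_s=$(grep -v Job /raidtest/tmp.5tOJFz24Yp | grep raid_bdev1 | awk '{print $6}')
    [[ $fail_per_s == "0.00" ]]
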
09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:21.461 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:21.461 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:20:21.461 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:20:21.461 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:20:21.461 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:20:21.461 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:20:21.461 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:20:21.461 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:20:21.461 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:20:21.461 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:20:21.461 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.jNojMyuYnb 00:20:21.461 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=51806 00:20:21.461 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 51806 /var/tmp/spdk-raid.sock 00:20:21.461 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:21.461 09:46:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 51806 ']' 00:20:21.461 09:46:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:21.461 09:46:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:21.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:21.461 09:46:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:21.461 09:46:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:21.461 09:46:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.461 [2024-07-15 09:46:49.545760] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:20:21.461 [2024-07-15 09:46:49.546026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:20:22.398 EAL: TSC is not safe to use in SMP mode 00:20:22.398 EAL: TSC is not invariant 00:20:22.398 [2024-07-15 09:46:50.266231] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.398 [2024-07-15 09:46:50.382244] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:20:22.398 [2024-07-15 09:46:50.384666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.398 [2024-07-15 09:46:50.385464] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:22.398 [2024-07-15 09:46:50.385475] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:22.398 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:22.398 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:20:22.398 09:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:22.398 09:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:22.656 BaseBdev1_malloc 00:20:22.656 09:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:20:22.915 true 00:20:22.915 09:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:23.174 [2024-07-15 09:46:51.040626] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:23.174 [2024-07-15 09:46:51.040691] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:23.174 [2024-07-15 09:46:51.040721] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x66e05234780 00:20:23.174 [2024-07-15 09:46:51.040728] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:23.174 [2024-07-15 09:46:51.041418] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:23.174 [2024-07-15 09:46:51.041444] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:23.174 BaseBdev1 00:20:23.174 09:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:23.174 09:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:23.174 BaseBdev2_malloc 00:20:23.174 09:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:20:23.433 true 00:20:23.433 09:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:23.690 [2024-07-15 09:46:51.620743] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:23.691 [2024-07-15 09:46:51.620803] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:23.691 [2024-07-15 09:46:51.620830] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x66e05234c80 00:20:23.691 [2024-07-15 09:46:51.620836] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:23.691 [2024-07-15 09:46:51.621486] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:23.691 [2024-07-15 09:46:51.621515] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev2 00:20:23.691 BaseBdev2 00:20:23.691 09:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:20:23.950 [2024-07-15 09:46:51.824801] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:23.950 [2024-07-15 09:46:51.825412] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:23.950 [2024-07-15 09:46:51.825472] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x66e05234f00 00:20:23.950 [2024-07-15 09:46:51.825477] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:23.950 [2024-07-15 09:46:51.825509] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x66e052a0e20 00:20:23.950 [2024-07-15 09:46:51.825578] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x66e05234f00 00:20:23.950 [2024-07-15 09:46:51.825581] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x66e05234f00 00:20:23.950 [2024-07-15 09:46:51.825598] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:23.950 09:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:23.950 09:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:23.950 09:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:23.950 09:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:23.950 09:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:23.950 09:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:23.950 09:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:23.950 09:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:23.950 09:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:23.950 09:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:23.950 09:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:23.950 09:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.950 09:46:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:23.950 "name": "raid_bdev1", 00:20:23.950 "uuid": "29a5920a-428f-11ef-a0af-c98d8ee52a94", 00:20:23.950 "strip_size_kb": 0, 00:20:23.950 "state": "online", 00:20:23.950 "raid_level": "raid1", 00:20:23.950 "superblock": true, 00:20:23.950 "num_base_bdevs": 2, 00:20:23.950 "num_base_bdevs_discovered": 2, 00:20:23.950 "num_base_bdevs_operational": 2, 00:20:23.950 "base_bdevs_list": [ 00:20:23.950 { 00:20:23.950 "name": "BaseBdev1", 00:20:23.950 "uuid": "26b32ce2-1726-745b-8cc4-641e89d4b4fc", 00:20:23.950 "is_configured": true, 00:20:23.950 "data_offset": 2048, 00:20:23.950 "data_size": 63488 00:20:23.950 }, 00:20:23.950 { 00:20:23.950 "name": "BaseBdev2", 00:20:23.950 "uuid": "5bb51129-4d2b-f35d-8ee3-4573025f57a6", 
00:20:23.950 "is_configured": true, 00:20:23.950 "data_offset": 2048, 00:20:23.950 "data_size": 63488 00:20:23.950 } 00:20:23.950 ] 00:20:23.950 }' 00:20:23.950 09:46:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:23.950 09:46:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.210 09:46:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:20:24.210 09:46:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:24.469 [2024-07-15 09:46:52.397001] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x66e052a0ec0 00:20:25.465 09:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:20:25.465 [2024-07-15 09:46:53.564765] bdev_raid.c:2222:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:20:25.465 [2024-07-15 09:46:53.564829] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:25.465 [2024-07-15 09:46:53.564955] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x66e052a0ec0 00:20:25.724 09:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:20:25.724 09:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:20:25.724 09:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:20:25.724 09:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=1 00:20:25.724 09:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:25.724 09:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:25.724 09:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:25.724 09:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:25.724 09:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:25.724 09:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:20:25.724 09:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:25.724 09:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:25.724 09:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:25.724 09:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:25.724 09:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.724 09:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.724 09:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:25.724 "name": "raid_bdev1", 00:20:25.724 "uuid": "29a5920a-428f-11ef-a0af-c98d8ee52a94", 00:20:25.724 "strip_size_kb": 0, 00:20:25.724 "state": "online", 00:20:25.724 "raid_level": "raid1", 00:20:25.724 
"superblock": true, 00:20:25.724 "num_base_bdevs": 2, 00:20:25.724 "num_base_bdevs_discovered": 1, 00:20:25.724 "num_base_bdevs_operational": 1, 00:20:25.724 "base_bdevs_list": [ 00:20:25.724 { 00:20:25.724 "name": null, 00:20:25.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.724 "is_configured": false, 00:20:25.724 "data_offset": 2048, 00:20:25.724 "data_size": 63488 00:20:25.724 }, 00:20:25.724 { 00:20:25.724 "name": "BaseBdev2", 00:20:25.724 "uuid": "5bb51129-4d2b-f35d-8ee3-4573025f57a6", 00:20:25.724 "is_configured": true, 00:20:25.724 "data_offset": 2048, 00:20:25.724 "data_size": 63488 00:20:25.724 } 00:20:25.724 ] 00:20:25.724 }' 00:20:25.724 09:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:25.724 09:46:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.289 09:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:26.289 [2024-07-15 09:46:54.302371] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:26.289 [2024-07-15 09:46:54.302407] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:26.289 [2024-07-15 09:46:54.302719] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:26.289 [2024-07-15 09:46:54.302728] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:26.289 [2024-07-15 09:46:54.302738] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:26.289 [2024-07-15 09:46:54.302742] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x66e05234f00 name raid_bdev1, state offline 00:20:26.289 0 00:20:26.289 09:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 51806 00:20:26.289 09:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 51806 ']' 00:20:26.289 09:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 51806 00:20:26.289 09:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:20:26.289 09:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:20:26.289 09:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 51806 00:20:26.289 09:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:20:26.289 09:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:20:26.289 killing process with pid 51806 00:20:26.289 09:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:20:26.289 09:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 51806' 00:20:26.289 09:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 51806 00:20:26.289 [2024-07-15 09:46:54.331859] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:26.289 09:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 51806 00:20:26.289 [2024-07-15 09:46:54.348302] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:26.547 09:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:20:26.547 09:46:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.jNojMyuYnb 00:20:26.547 09:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:20:26.547 09:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:20:26.547 09:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:20:26.547 09:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:26.547 09:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:20:26.547 09:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:20:26.547 00:20:26.547 real 0m5.080s 00:20:26.547 user 0m7.191s 00:20:26.547 sys 0m1.171s 00:20:26.547 09:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:26.547 ************************************ 00:20:26.547 END TEST raid_write_error_test 00:20:26.547 ************************************ 00:20:26.547 09:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.547 09:46:54 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:26.547 09:46:54 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:20:26.547 09:46:54 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:20:26.547 09:46:54 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:20:26.547 09:46:54 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:20:26.547 09:46:54 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:26.547 09:46:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:26.806 ************************************ 00:20:26.806 START TEST raid_state_function_test 00:20:26.806 ************************************ 00:20:26.806 09:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 3 false 00:20:26.806 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:20:26.806 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:20:26.806 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:20:26.806 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:20:26.806 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:20:26.806 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:26.806 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:20:26.806 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:26.806 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:26.806 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:20:26.806 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:26.806 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:26.806 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:20:26.806 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:26.806 09:46:54 
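
The loop markers at bdev_raid.sh@865-867 above reveal the harness structure: the state-function, superblock and I/O-error tests all run inside a sweep over base-bdev counts and raid levels, so the raid1 error tests above ran with 2 base bdevs and the trace now advances to raid0 with 3. A sketch of the enclosing shape, reconstructed from those markers only (the elided body is an assumption):

    for n in {2..4}; do
        for level in raid0 concat raid1; do
            run_test "raid_state_function_test" raid_state_function_test "$level" "$n" false
        done
        # ...further per-n tests follow, e.g. the raid1 superblock and
        # I/O-error tests traced earlier in this log
    done
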
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:26.806 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:26.806 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:20:26.806 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:20:26.806 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:20:26.806 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:20:26.806 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:20:26.806 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:20:26.806 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:20:26.806 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:20:26.806 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:20:26.806 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:20:26.806 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=51928 00:20:26.806 Process raid pid: 51928 00:20:26.806 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 51928' 00:20:26.806 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 51928 /var/tmp/spdk-raid.sock 00:20:26.806 09:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 51928 ']' 00:20:26.806 09:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:26.806 09:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:26.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:26.806 09:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:26.806 09:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:26.806 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:26.806 09:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.806 [2024-07-15 09:46:54.672841] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:20:26.806 [2024-07-15 09:46:54.673162] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:20:27.375 EAL: TSC is not safe to use in SMP mode 00:20:27.375 EAL: TSC is not invariant 00:20:27.375 [2024-07-15 09:46:55.398100] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.634 [2024-07-15 09:46:55.513735] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
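
raid_state_function_test drives the bdev layer through RPC only, so instead of bdevperf it launches the minimal bdev_svc app and blocks until the socket answers. A simplified sketch of that startup; waitforlisten's polling is condensed here to an rpc_get_methods probe:

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # wait until the RPC socket accepts requests
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
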
00:20:27.634 [2024-07-15 09:46:55.516197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:27.634 [2024-07-15 09:46:55.516944] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:27.634 [2024-07-15 09:46:55.516955] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:27.634 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:27.634 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:20:27.634 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:27.892 [2024-07-15 09:46:55.771824] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:27.892 [2024-07-15 09:46:55.771882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:27.892 [2024-07-15 09:46:55.771886] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:27.892 [2024-07-15 09:46:55.771893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:27.892 [2024-07-15 09:46:55.771896] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:27.892 [2024-07-15 09:46:55.771902] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:27.892 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:27.892 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:27.892 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:27.892 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:27.892 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:27.892 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:27.893 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:27.893 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:27.893 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:27.893 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:27.893 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.893 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:28.151 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:28.151 "name": "Existed_Raid", 00:20:28.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.151 "strip_size_kb": 64, 00:20:28.151 "state": "configuring", 00:20:28.151 "raid_level": "raid0", 00:20:28.151 "superblock": false, 00:20:28.151 "num_base_bdevs": 3, 00:20:28.151 "num_base_bdevs_discovered": 0, 00:20:28.151 "num_base_bdevs_operational": 3, 00:20:28.151 "base_bdevs_list": [ 
00:20:28.151 { 00:20:28.151 "name": "BaseBdev1", 00:20:28.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.151 "is_configured": false, 00:20:28.151 "data_offset": 0, 00:20:28.151 "data_size": 0 00:20:28.151 }, 00:20:28.151 { 00:20:28.151 "name": "BaseBdev2", 00:20:28.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.151 "is_configured": false, 00:20:28.151 "data_offset": 0, 00:20:28.151 "data_size": 0 00:20:28.151 }, 00:20:28.151 { 00:20:28.151 "name": "BaseBdev3", 00:20:28.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.151 "is_configured": false, 00:20:28.151 "data_offset": 0, 00:20:28.151 "data_size": 0 00:20:28.151 } 00:20:28.151 ] 00:20:28.151 }' 00:20:28.151 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:28.151 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.410 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:28.410 [2024-07-15 09:46:56.487937] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:28.410 [2024-07-15 09:46:56.487971] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2edd7634500 name Existed_Raid, state configuring 00:20:28.410 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:28.668 [2024-07-15 09:46:56.679989] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:28.668 [2024-07-15 09:46:56.680045] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:28.668 [2024-07-15 09:46:56.680049] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:28.668 [2024-07-15 09:46:56.680056] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:28.668 [2024-07-15 09:46:56.680059] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:28.668 [2024-07-15 09:46:56.680065] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:28.668 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:28.927 [2024-07-15 09:46:56.957182] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:28.927 BaseBdev1 00:20:28.927 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:20:28.927 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:28.927 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:28.927 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:28.927 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:28.927 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:28.927 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:29.187 09:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:29.446 [ 00:20:29.446 { 00:20:29.446 "name": "BaseBdev1", 00:20:29.446 "aliases": [ 00:20:29.446 "2cb489dd-428f-11ef-a0af-c98d8ee52a94" 00:20:29.446 ], 00:20:29.446 "product_name": "Malloc disk", 00:20:29.446 "block_size": 512, 00:20:29.446 "num_blocks": 65536, 00:20:29.446 "uuid": "2cb489dd-428f-11ef-a0af-c98d8ee52a94", 00:20:29.446 "assigned_rate_limits": { 00:20:29.446 "rw_ios_per_sec": 0, 00:20:29.446 "rw_mbytes_per_sec": 0, 00:20:29.446 "r_mbytes_per_sec": 0, 00:20:29.446 "w_mbytes_per_sec": 0 00:20:29.446 }, 00:20:29.446 "claimed": true, 00:20:29.446 "claim_type": "exclusive_write", 00:20:29.446 "zoned": false, 00:20:29.446 "supported_io_types": { 00:20:29.446 "read": true, 00:20:29.446 "write": true, 00:20:29.447 "unmap": true, 00:20:29.447 "flush": true, 00:20:29.447 "reset": true, 00:20:29.447 "nvme_admin": false, 00:20:29.447 "nvme_io": false, 00:20:29.447 "nvme_io_md": false, 00:20:29.447 "write_zeroes": true, 00:20:29.447 "zcopy": true, 00:20:29.447 "get_zone_info": false, 00:20:29.447 "zone_management": false, 00:20:29.447 "zone_append": false, 00:20:29.447 "compare": false, 00:20:29.447 "compare_and_write": false, 00:20:29.447 "abort": true, 00:20:29.447 "seek_hole": false, 00:20:29.447 "seek_data": false, 00:20:29.447 "copy": true, 00:20:29.447 "nvme_iov_md": false 00:20:29.447 }, 00:20:29.447 "memory_domains": [ 00:20:29.447 { 00:20:29.447 "dma_device_id": "system", 00:20:29.447 "dma_device_type": 1 00:20:29.447 }, 00:20:29.447 { 00:20:29.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:29.447 "dma_device_type": 2 00:20:29.447 } 00:20:29.447 ], 00:20:29.447 "driver_specific": {} 00:20:29.447 } 00:20:29.447 ] 00:20:29.447 09:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:29.447 09:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:29.447 09:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:29.447 09:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:29.447 09:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:29.447 09:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:29.447 09:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:29.447 09:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:29.447 09:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:29.447 09:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:29.447 09:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:29.447 09:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:29.447 09:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.707 09:46:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:29.707 "name": "Existed_Raid", 00:20:29.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.707 "strip_size_kb": 64, 00:20:29.707 "state": "configuring", 00:20:29.707 "raid_level": "raid0", 00:20:29.707 "superblock": false, 00:20:29.707 "num_base_bdevs": 3, 00:20:29.707 "num_base_bdevs_discovered": 1, 00:20:29.707 "num_base_bdevs_operational": 3, 00:20:29.707 "base_bdevs_list": [ 00:20:29.707 { 00:20:29.707 "name": "BaseBdev1", 00:20:29.707 "uuid": "2cb489dd-428f-11ef-a0af-c98d8ee52a94", 00:20:29.707 "is_configured": true, 00:20:29.707 "data_offset": 0, 00:20:29.707 "data_size": 65536 00:20:29.707 }, 00:20:29.707 { 00:20:29.707 "name": "BaseBdev2", 00:20:29.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.707 "is_configured": false, 00:20:29.707 "data_offset": 0, 00:20:29.707 "data_size": 0 00:20:29.707 }, 00:20:29.707 { 00:20:29.707 "name": "BaseBdev3", 00:20:29.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.707 "is_configured": false, 00:20:29.707 "data_offset": 0, 00:20:29.707 "data_size": 0 00:20:29.707 } 00:20:29.707 ] 00:20:29.707 }' 00:20:29.707 09:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:29.707 09:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.966 09:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:30.226 [2024-07-15 09:46:58.136257] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:30.226 [2024-07-15 09:46:58.136292] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2edd7634500 name Existed_Raid, state configuring 00:20:30.226 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:30.486 [2024-07-15 09:46:58.348307] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:30.486 [2024-07-15 09:46:58.349194] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:30.486 [2024-07-15 09:46:58.349241] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:30.486 [2024-07-15 09:46:58.349245] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:30.486 [2024-07-15 09:46:58.349252] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:30.486 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:20:30.486 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:30.486 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:30.486 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:30.486 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:30.486 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:30.486 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:30.486 09:46:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:30.486 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:30.486 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:30.486 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:30.486 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:30.486 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.486 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:30.486 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:30.486 "name": "Existed_Raid", 00:20:30.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.486 "strip_size_kb": 64, 00:20:30.486 "state": "configuring", 00:20:30.486 "raid_level": "raid0", 00:20:30.486 "superblock": false, 00:20:30.486 "num_base_bdevs": 3, 00:20:30.486 "num_base_bdevs_discovered": 1, 00:20:30.486 "num_base_bdevs_operational": 3, 00:20:30.486 "base_bdevs_list": [ 00:20:30.486 { 00:20:30.486 "name": "BaseBdev1", 00:20:30.486 "uuid": "2cb489dd-428f-11ef-a0af-c98d8ee52a94", 00:20:30.486 "is_configured": true, 00:20:30.486 "data_offset": 0, 00:20:30.486 "data_size": 65536 00:20:30.486 }, 00:20:30.486 { 00:20:30.486 "name": "BaseBdev2", 00:20:30.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.486 "is_configured": false, 00:20:30.486 "data_offset": 0, 00:20:30.486 "data_size": 0 00:20:30.486 }, 00:20:30.486 { 00:20:30.486 "name": "BaseBdev3", 00:20:30.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.486 "is_configured": false, 00:20:30.486 "data_offset": 0, 00:20:30.486 "data_size": 0 00:20:30.486 } 00:20:30.486 ] 00:20:30.486 }' 00:20:30.486 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:30.486 09:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.053 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:31.053 [2024-07-15 09:46:59.052573] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:31.053 BaseBdev2 00:20:31.053 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:20:31.053 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:20:31.053 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:31.053 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:31.053 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:31.053 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:31.053 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:31.312 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:31.570 [ 00:20:31.570 { 00:20:31.570 "name": "BaseBdev2", 00:20:31.570 "aliases": [ 00:20:31.570 "2df46c12-428f-11ef-a0af-c98d8ee52a94" 00:20:31.570 ], 00:20:31.570 "product_name": "Malloc disk", 00:20:31.570 "block_size": 512, 00:20:31.570 "num_blocks": 65536, 00:20:31.570 "uuid": "2df46c12-428f-11ef-a0af-c98d8ee52a94", 00:20:31.570 "assigned_rate_limits": { 00:20:31.570 "rw_ios_per_sec": 0, 00:20:31.570 "rw_mbytes_per_sec": 0, 00:20:31.570 "r_mbytes_per_sec": 0, 00:20:31.570 "w_mbytes_per_sec": 0 00:20:31.570 }, 00:20:31.570 "claimed": true, 00:20:31.570 "claim_type": "exclusive_write", 00:20:31.570 "zoned": false, 00:20:31.570 "supported_io_types": { 00:20:31.570 "read": true, 00:20:31.570 "write": true, 00:20:31.570 "unmap": true, 00:20:31.570 "flush": true, 00:20:31.570 "reset": true, 00:20:31.570 "nvme_admin": false, 00:20:31.570 "nvme_io": false, 00:20:31.570 "nvme_io_md": false, 00:20:31.570 "write_zeroes": true, 00:20:31.570 "zcopy": true, 00:20:31.570 "get_zone_info": false, 00:20:31.570 "zone_management": false, 00:20:31.570 "zone_append": false, 00:20:31.570 "compare": false, 00:20:31.570 "compare_and_write": false, 00:20:31.570 "abort": true, 00:20:31.570 "seek_hole": false, 00:20:31.570 "seek_data": false, 00:20:31.570 "copy": true, 00:20:31.570 "nvme_iov_md": false 00:20:31.570 }, 00:20:31.570 "memory_domains": [ 00:20:31.570 { 00:20:31.570 "dma_device_id": "system", 00:20:31.570 "dma_device_type": 1 00:20:31.570 }, 00:20:31.570 { 00:20:31.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:31.570 "dma_device_type": 2 00:20:31.570 } 00:20:31.570 ], 00:20:31.570 "driver_specific": {} 00:20:31.570 } 00:20:31.570 ] 00:20:31.570 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:31.570 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:31.570 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:31.570 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:31.570 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:31.570 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:31.570 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:31.570 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:31.570 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:31.570 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:31.570 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:31.570 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:31.570 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:31.570 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.570 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
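The verify_raid_bdev_state helper traced here is the core assertion of this test: it dumps every raid bdev over JSON-RPC, isolates Existed_Raid with jq, and compares fields such as state and num_base_bdevs_discovered against the expected values. A condensed sketch of that check, assuming the same socket and bdev name as this run (the variable names are illustrative, not the helper's own):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    # At this point in the log BaseBdev1 and BaseBdev2 are claimed and BaseBdev3
    # is not, so the array is still assembling: state "configuring", 2 of 3 found.
    [[ $(jq -r .state <<< "$info") == configuring ]] || exit 1
    [[ $(jq -r .num_base_bdevs_discovered <<< "$info") -eq 2 ]] || exit 1

The JSON dump that follows in the trace shows exactly those values being verified.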
00:20:31.829 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:31.829 "name": "Existed_Raid", 00:20:31.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.829 "strip_size_kb": 64, 00:20:31.829 "state": "configuring", 00:20:31.829 "raid_level": "raid0", 00:20:31.829 "superblock": false, 00:20:31.829 "num_base_bdevs": 3, 00:20:31.829 "num_base_bdevs_discovered": 2, 00:20:31.829 "num_base_bdevs_operational": 3, 00:20:31.829 "base_bdevs_list": [ 00:20:31.829 { 00:20:31.829 "name": "BaseBdev1", 00:20:31.829 "uuid": "2cb489dd-428f-11ef-a0af-c98d8ee52a94", 00:20:31.829 "is_configured": true, 00:20:31.829 "data_offset": 0, 00:20:31.829 "data_size": 65536 00:20:31.829 }, 00:20:31.829 { 00:20:31.829 "name": "BaseBdev2", 00:20:31.829 "uuid": "2df46c12-428f-11ef-a0af-c98d8ee52a94", 00:20:31.829 "is_configured": true, 00:20:31.829 "data_offset": 0, 00:20:31.829 "data_size": 65536 00:20:31.829 }, 00:20:31.829 { 00:20:31.829 "name": "BaseBdev3", 00:20:31.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.829 "is_configured": false, 00:20:31.829 "data_offset": 0, 00:20:31.829 "data_size": 0 00:20:31.829 } 00:20:31.829 ] 00:20:31.829 }' 00:20:31.829 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:31.829 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.087 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:32.346 [2024-07-15 09:47:00.188824] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:32.347 [2024-07-15 09:47:00.188858] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2edd7634a00 00:20:32.347 [2024-07-15 09:47:00.188862] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:32.347 [2024-07-15 09:47:00.188883] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2edd7697e20 00:20:32.347 [2024-07-15 09:47:00.188986] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2edd7634a00 00:20:32.347 [2024-07-15 09:47:00.188990] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2edd7634a00 00:20:32.347 [2024-07-15 09:47:00.189020] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:32.347 BaseBdev3 00:20:32.347 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:20:32.347 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:20:32.347 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:32.347 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:32.347 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:32.347 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:32.347 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:32.347 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:32.606 [ 00:20:32.606 { 00:20:32.606 "name": "BaseBdev3", 00:20:32.606 "aliases": [ 00:20:32.606 "2ea1ccff-428f-11ef-a0af-c98d8ee52a94" 00:20:32.606 ], 00:20:32.606 "product_name": "Malloc disk", 00:20:32.606 "block_size": 512, 00:20:32.606 "num_blocks": 65536, 00:20:32.606 "uuid": "2ea1ccff-428f-11ef-a0af-c98d8ee52a94", 00:20:32.606 "assigned_rate_limits": { 00:20:32.606 "rw_ios_per_sec": 0, 00:20:32.606 "rw_mbytes_per_sec": 0, 00:20:32.606 "r_mbytes_per_sec": 0, 00:20:32.606 "w_mbytes_per_sec": 0 00:20:32.606 }, 00:20:32.606 "claimed": true, 00:20:32.606 "claim_type": "exclusive_write", 00:20:32.606 "zoned": false, 00:20:32.606 "supported_io_types": { 00:20:32.606 "read": true, 00:20:32.606 "write": true, 00:20:32.606 "unmap": true, 00:20:32.606 "flush": true, 00:20:32.606 "reset": true, 00:20:32.606 "nvme_admin": false, 00:20:32.606 "nvme_io": false, 00:20:32.606 "nvme_io_md": false, 00:20:32.606 "write_zeroes": true, 00:20:32.606 "zcopy": true, 00:20:32.606 "get_zone_info": false, 00:20:32.606 "zone_management": false, 00:20:32.606 "zone_append": false, 00:20:32.606 "compare": false, 00:20:32.606 "compare_and_write": false, 00:20:32.606 "abort": true, 00:20:32.606 "seek_hole": false, 00:20:32.606 "seek_data": false, 00:20:32.606 "copy": true, 00:20:32.606 "nvme_iov_md": false 00:20:32.606 }, 00:20:32.606 "memory_domains": [ 00:20:32.606 { 00:20:32.606 "dma_device_id": "system", 00:20:32.606 "dma_device_type": 1 00:20:32.606 }, 00:20:32.606 { 00:20:32.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:32.606 "dma_device_type": 2 00:20:32.606 } 00:20:32.606 ], 00:20:32.606 "driver_specific": {} 00:20:32.606 } 00:20:32.606 ] 00:20:32.606 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:32.606 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:32.606 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:32.606 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:20:32.606 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:32.606 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:32.606 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:32.606 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:32.606 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:32.606 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:32.606 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:32.606 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:32.606 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:32.606 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:32.606 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.864 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 
-- # raid_bdev_info='{ 00:20:32.864 "name": "Existed_Raid", 00:20:32.864 "uuid": "2ea1d3a4-428f-11ef-a0af-c98d8ee52a94", 00:20:32.864 "strip_size_kb": 64, 00:20:32.864 "state": "online", 00:20:32.864 "raid_level": "raid0", 00:20:32.864 "superblock": false, 00:20:32.864 "num_base_bdevs": 3, 00:20:32.864 "num_base_bdevs_discovered": 3, 00:20:32.864 "num_base_bdevs_operational": 3, 00:20:32.864 "base_bdevs_list": [ 00:20:32.864 { 00:20:32.864 "name": "BaseBdev1", 00:20:32.864 "uuid": "2cb489dd-428f-11ef-a0af-c98d8ee52a94", 00:20:32.865 "is_configured": true, 00:20:32.865 "data_offset": 0, 00:20:32.865 "data_size": 65536 00:20:32.865 }, 00:20:32.865 { 00:20:32.865 "name": "BaseBdev2", 00:20:32.865 "uuid": "2df46c12-428f-11ef-a0af-c98d8ee52a94", 00:20:32.865 "is_configured": true, 00:20:32.865 "data_offset": 0, 00:20:32.865 "data_size": 65536 00:20:32.865 }, 00:20:32.865 { 00:20:32.865 "name": "BaseBdev3", 00:20:32.865 "uuid": "2ea1ccff-428f-11ef-a0af-c98d8ee52a94", 00:20:32.865 "is_configured": true, 00:20:32.865 "data_offset": 0, 00:20:32.865 "data_size": 65536 00:20:32.865 } 00:20:32.865 ] 00:20:32.865 }' 00:20:32.865 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:32.865 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.123 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:20:33.123 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:33.123 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:33.123 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:33.123 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:33.123 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:33.123 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:33.123 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:33.381 [2024-07-15 09:47:01.352939] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:33.381 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:33.381 "name": "Existed_Raid", 00:20:33.381 "aliases": [ 00:20:33.381 "2ea1d3a4-428f-11ef-a0af-c98d8ee52a94" 00:20:33.381 ], 00:20:33.381 "product_name": "Raid Volume", 00:20:33.381 "block_size": 512, 00:20:33.381 "num_blocks": 196608, 00:20:33.381 "uuid": "2ea1d3a4-428f-11ef-a0af-c98d8ee52a94", 00:20:33.381 "assigned_rate_limits": { 00:20:33.381 "rw_ios_per_sec": 0, 00:20:33.381 "rw_mbytes_per_sec": 0, 00:20:33.381 "r_mbytes_per_sec": 0, 00:20:33.381 "w_mbytes_per_sec": 0 00:20:33.381 }, 00:20:33.381 "claimed": false, 00:20:33.381 "zoned": false, 00:20:33.381 "supported_io_types": { 00:20:33.381 "read": true, 00:20:33.381 "write": true, 00:20:33.381 "unmap": true, 00:20:33.381 "flush": true, 00:20:33.381 "reset": true, 00:20:33.381 "nvme_admin": false, 00:20:33.381 "nvme_io": false, 00:20:33.381 "nvme_io_md": false, 00:20:33.381 "write_zeroes": true, 00:20:33.381 "zcopy": false, 00:20:33.381 "get_zone_info": false, 00:20:33.381 "zone_management": false, 00:20:33.381 "zone_append": false, 00:20:33.381 "compare": false, 
00:20:33.381 "compare_and_write": false, 00:20:33.381 "abort": false, 00:20:33.381 "seek_hole": false, 00:20:33.381 "seek_data": false, 00:20:33.381 "copy": false, 00:20:33.381 "nvme_iov_md": false 00:20:33.381 }, 00:20:33.381 "memory_domains": [ 00:20:33.381 { 00:20:33.381 "dma_device_id": "system", 00:20:33.381 "dma_device_type": 1 00:20:33.381 }, 00:20:33.381 { 00:20:33.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:33.381 "dma_device_type": 2 00:20:33.381 }, 00:20:33.381 { 00:20:33.381 "dma_device_id": "system", 00:20:33.381 "dma_device_type": 1 00:20:33.381 }, 00:20:33.381 { 00:20:33.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:33.381 "dma_device_type": 2 00:20:33.381 }, 00:20:33.381 { 00:20:33.381 "dma_device_id": "system", 00:20:33.381 "dma_device_type": 1 00:20:33.381 }, 00:20:33.381 { 00:20:33.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:33.381 "dma_device_type": 2 00:20:33.381 } 00:20:33.381 ], 00:20:33.381 "driver_specific": { 00:20:33.381 "raid": { 00:20:33.381 "uuid": "2ea1d3a4-428f-11ef-a0af-c98d8ee52a94", 00:20:33.381 "strip_size_kb": 64, 00:20:33.381 "state": "online", 00:20:33.381 "raid_level": "raid0", 00:20:33.381 "superblock": false, 00:20:33.381 "num_base_bdevs": 3, 00:20:33.381 "num_base_bdevs_discovered": 3, 00:20:33.381 "num_base_bdevs_operational": 3, 00:20:33.381 "base_bdevs_list": [ 00:20:33.381 { 00:20:33.381 "name": "BaseBdev1", 00:20:33.381 "uuid": "2cb489dd-428f-11ef-a0af-c98d8ee52a94", 00:20:33.381 "is_configured": true, 00:20:33.381 "data_offset": 0, 00:20:33.381 "data_size": 65536 00:20:33.381 }, 00:20:33.381 { 00:20:33.381 "name": "BaseBdev2", 00:20:33.381 "uuid": "2df46c12-428f-11ef-a0af-c98d8ee52a94", 00:20:33.381 "is_configured": true, 00:20:33.381 "data_offset": 0, 00:20:33.381 "data_size": 65536 00:20:33.381 }, 00:20:33.381 { 00:20:33.381 "name": "BaseBdev3", 00:20:33.381 "uuid": "2ea1ccff-428f-11ef-a0af-c98d8ee52a94", 00:20:33.381 "is_configured": true, 00:20:33.381 "data_offset": 0, 00:20:33.381 "data_size": 65536 00:20:33.381 } 00:20:33.381 ] 00:20:33.381 } 00:20:33.381 } 00:20:33.381 }' 00:20:33.381 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:33.381 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:20:33.381 BaseBdev2 00:20:33.381 BaseBdev3' 00:20:33.381 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:33.381 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:33.381 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:33.640 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:33.640 "name": "BaseBdev1", 00:20:33.640 "aliases": [ 00:20:33.640 "2cb489dd-428f-11ef-a0af-c98d8ee52a94" 00:20:33.640 ], 00:20:33.640 "product_name": "Malloc disk", 00:20:33.640 "block_size": 512, 00:20:33.640 "num_blocks": 65536, 00:20:33.640 "uuid": "2cb489dd-428f-11ef-a0af-c98d8ee52a94", 00:20:33.640 "assigned_rate_limits": { 00:20:33.640 "rw_ios_per_sec": 0, 00:20:33.640 "rw_mbytes_per_sec": 0, 00:20:33.640 "r_mbytes_per_sec": 0, 00:20:33.640 "w_mbytes_per_sec": 0 00:20:33.640 }, 00:20:33.640 "claimed": true, 00:20:33.640 "claim_type": "exclusive_write", 00:20:33.640 "zoned": false, 00:20:33.640 
"supported_io_types": { 00:20:33.640 "read": true, 00:20:33.640 "write": true, 00:20:33.640 "unmap": true, 00:20:33.640 "flush": true, 00:20:33.640 "reset": true, 00:20:33.640 "nvme_admin": false, 00:20:33.640 "nvme_io": false, 00:20:33.640 "nvme_io_md": false, 00:20:33.640 "write_zeroes": true, 00:20:33.640 "zcopy": true, 00:20:33.640 "get_zone_info": false, 00:20:33.640 "zone_management": false, 00:20:33.640 "zone_append": false, 00:20:33.640 "compare": false, 00:20:33.640 "compare_and_write": false, 00:20:33.640 "abort": true, 00:20:33.640 "seek_hole": false, 00:20:33.640 "seek_data": false, 00:20:33.640 "copy": true, 00:20:33.640 "nvme_iov_md": false 00:20:33.640 }, 00:20:33.640 "memory_domains": [ 00:20:33.640 { 00:20:33.640 "dma_device_id": "system", 00:20:33.640 "dma_device_type": 1 00:20:33.640 }, 00:20:33.640 { 00:20:33.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:33.640 "dma_device_type": 2 00:20:33.640 } 00:20:33.640 ], 00:20:33.640 "driver_specific": {} 00:20:33.640 }' 00:20:33.640 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:33.640 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:33.640 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:33.640 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:33.640 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:33.640 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:33.640 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:33.640 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:33.640 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:33.640 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:33.640 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:33.640 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:33.640 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:33.640 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:33.640 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:33.898 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:33.898 "name": "BaseBdev2", 00:20:33.898 "aliases": [ 00:20:33.898 "2df46c12-428f-11ef-a0af-c98d8ee52a94" 00:20:33.898 ], 00:20:33.898 "product_name": "Malloc disk", 00:20:33.898 "block_size": 512, 00:20:33.898 "num_blocks": 65536, 00:20:33.898 "uuid": "2df46c12-428f-11ef-a0af-c98d8ee52a94", 00:20:33.898 "assigned_rate_limits": { 00:20:33.898 "rw_ios_per_sec": 0, 00:20:33.898 "rw_mbytes_per_sec": 0, 00:20:33.898 "r_mbytes_per_sec": 0, 00:20:33.898 "w_mbytes_per_sec": 0 00:20:33.898 }, 00:20:33.898 "claimed": true, 00:20:33.898 "claim_type": "exclusive_write", 00:20:33.898 "zoned": false, 00:20:33.898 "supported_io_types": { 00:20:33.898 "read": true, 00:20:33.898 "write": true, 00:20:33.898 "unmap": true, 00:20:33.898 "flush": true, 00:20:33.898 "reset": true, 00:20:33.898 "nvme_admin": false, 
00:20:33.898 "nvme_io": false, 00:20:33.898 "nvme_io_md": false, 00:20:33.898 "write_zeroes": true, 00:20:33.898 "zcopy": true, 00:20:33.898 "get_zone_info": false, 00:20:33.898 "zone_management": false, 00:20:33.898 "zone_append": false, 00:20:33.898 "compare": false, 00:20:33.898 "compare_and_write": false, 00:20:33.898 "abort": true, 00:20:33.898 "seek_hole": false, 00:20:33.898 "seek_data": false, 00:20:33.898 "copy": true, 00:20:33.898 "nvme_iov_md": false 00:20:33.898 }, 00:20:33.898 "memory_domains": [ 00:20:33.898 { 00:20:33.898 "dma_device_id": "system", 00:20:33.898 "dma_device_type": 1 00:20:33.898 }, 00:20:33.898 { 00:20:33.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:33.898 "dma_device_type": 2 00:20:33.898 } 00:20:33.898 ], 00:20:33.898 "driver_specific": {} 00:20:33.898 }' 00:20:33.898 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:33.898 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:33.898 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:33.898 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:33.898 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:33.898 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:33.898 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:33.898 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:33.898 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:33.898 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:33.898 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:33.898 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:33.898 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:33.898 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:33.898 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:34.156 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:34.156 "name": "BaseBdev3", 00:20:34.156 "aliases": [ 00:20:34.156 "2ea1ccff-428f-11ef-a0af-c98d8ee52a94" 00:20:34.156 ], 00:20:34.156 "product_name": "Malloc disk", 00:20:34.156 "block_size": 512, 00:20:34.156 "num_blocks": 65536, 00:20:34.156 "uuid": "2ea1ccff-428f-11ef-a0af-c98d8ee52a94", 00:20:34.156 "assigned_rate_limits": { 00:20:34.156 "rw_ios_per_sec": 0, 00:20:34.156 "rw_mbytes_per_sec": 0, 00:20:34.156 "r_mbytes_per_sec": 0, 00:20:34.156 "w_mbytes_per_sec": 0 00:20:34.156 }, 00:20:34.156 "claimed": true, 00:20:34.156 "claim_type": "exclusive_write", 00:20:34.156 "zoned": false, 00:20:34.156 "supported_io_types": { 00:20:34.157 "read": true, 00:20:34.157 "write": true, 00:20:34.157 "unmap": true, 00:20:34.157 "flush": true, 00:20:34.157 "reset": true, 00:20:34.157 "nvme_admin": false, 00:20:34.157 "nvme_io": false, 00:20:34.157 "nvme_io_md": false, 00:20:34.157 "write_zeroes": true, 00:20:34.157 "zcopy": true, 00:20:34.157 "get_zone_info": false, 00:20:34.157 "zone_management": 
false, 00:20:34.157 "zone_append": false, 00:20:34.157 "compare": false, 00:20:34.157 "compare_and_write": false, 00:20:34.157 "abort": true, 00:20:34.157 "seek_hole": false, 00:20:34.157 "seek_data": false, 00:20:34.157 "copy": true, 00:20:34.157 "nvme_iov_md": false 00:20:34.157 }, 00:20:34.157 "memory_domains": [ 00:20:34.157 { 00:20:34.157 "dma_device_id": "system", 00:20:34.157 "dma_device_type": 1 00:20:34.157 }, 00:20:34.157 { 00:20:34.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:34.157 "dma_device_type": 2 00:20:34.157 } 00:20:34.157 ], 00:20:34.157 "driver_specific": {} 00:20:34.157 }' 00:20:34.157 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:34.157 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:34.157 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:34.157 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:34.157 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:34.157 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:34.157 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:34.157 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:34.157 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:34.157 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:34.157 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:34.415 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:34.415 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:34.415 [2024-07-15 09:47:02.453061] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:34.415 [2024-07-15 09:47:02.453095] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:34.415 [2024-07-15 09:47:02.453119] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:34.415 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:20:34.415 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:20:34.415 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:34.415 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:34.415 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:20:34.415 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:20:34.415 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:34.415 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:20:34.415 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:34.415 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:34.415 09:47:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:34.415 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:34.415 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:34.415 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:34.415 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:34.415 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:34.415 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:34.674 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:34.674 "name": "Existed_Raid", 00:20:34.674 "uuid": "2ea1d3a4-428f-11ef-a0af-c98d8ee52a94", 00:20:34.674 "strip_size_kb": 64, 00:20:34.674 "state": "offline", 00:20:34.674 "raid_level": "raid0", 00:20:34.674 "superblock": false, 00:20:34.674 "num_base_bdevs": 3, 00:20:34.674 "num_base_bdevs_discovered": 2, 00:20:34.674 "num_base_bdevs_operational": 2, 00:20:34.674 "base_bdevs_list": [ 00:20:34.674 { 00:20:34.674 "name": null, 00:20:34.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.674 "is_configured": false, 00:20:34.674 "data_offset": 0, 00:20:34.674 "data_size": 65536 00:20:34.674 }, 00:20:34.674 { 00:20:34.674 "name": "BaseBdev2", 00:20:34.674 "uuid": "2df46c12-428f-11ef-a0af-c98d8ee52a94", 00:20:34.674 "is_configured": true, 00:20:34.674 "data_offset": 0, 00:20:34.674 "data_size": 65536 00:20:34.674 }, 00:20:34.674 { 00:20:34.674 "name": "BaseBdev3", 00:20:34.674 "uuid": "2ea1ccff-428f-11ef-a0af-c98d8ee52a94", 00:20:34.674 "is_configured": true, 00:20:34.674 "data_offset": 0, 00:20:34.674 "data_size": 65536 00:20:34.674 } 00:20:34.674 ] 00:20:34.674 }' 00:20:34.674 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:34.674 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.933 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:20:34.933 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:34.933 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:34.933 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:35.192 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:35.192 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:35.192 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:35.451 [2024-07-15 09:47:03.453857] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:35.451 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:35.451 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:35.451 09:47:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.451 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:35.710 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:35.710 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:35.710 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:35.968 [2024-07-15 09:47:03.962806] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:35.968 [2024-07-15 09:47:03.962838] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2edd7634a00 name Existed_Raid, state offline 00:20:35.968 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:35.968 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:35.968 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.968 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:20:36.227 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:20:36.227 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:20:36.227 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:20:36.227 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:20:36.227 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:36.227 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:36.486 BaseBdev2 00:20:36.486 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:20:36.486 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:20:36.486 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:36.486 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:36.486 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:36.486 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:36.486 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:36.745 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:37.004 [ 00:20:37.004 { 00:20:37.004 "name": "BaseBdev2", 00:20:37.004 "aliases": [ 00:20:37.004 "312f52c6-428f-11ef-a0af-c98d8ee52a94" 00:20:37.004 ], 00:20:37.004 "product_name": "Malloc disk", 00:20:37.004 "block_size": 512, 00:20:37.004 "num_blocks": 65536, 00:20:37.004 "uuid": "312f52c6-428f-11ef-a0af-c98d8ee52a94", 
00:20:37.004 "assigned_rate_limits": { 00:20:37.004 "rw_ios_per_sec": 0, 00:20:37.004 "rw_mbytes_per_sec": 0, 00:20:37.004 "r_mbytes_per_sec": 0, 00:20:37.004 "w_mbytes_per_sec": 0 00:20:37.004 }, 00:20:37.004 "claimed": false, 00:20:37.004 "zoned": false, 00:20:37.004 "supported_io_types": { 00:20:37.004 "read": true, 00:20:37.004 "write": true, 00:20:37.004 "unmap": true, 00:20:37.004 "flush": true, 00:20:37.004 "reset": true, 00:20:37.004 "nvme_admin": false, 00:20:37.004 "nvme_io": false, 00:20:37.004 "nvme_io_md": false, 00:20:37.004 "write_zeroes": true, 00:20:37.004 "zcopy": true, 00:20:37.004 "get_zone_info": false, 00:20:37.004 "zone_management": false, 00:20:37.005 "zone_append": false, 00:20:37.005 "compare": false, 00:20:37.005 "compare_and_write": false, 00:20:37.005 "abort": true, 00:20:37.005 "seek_hole": false, 00:20:37.005 "seek_data": false, 00:20:37.005 "copy": true, 00:20:37.005 "nvme_iov_md": false 00:20:37.005 }, 00:20:37.005 "memory_domains": [ 00:20:37.005 { 00:20:37.005 "dma_device_id": "system", 00:20:37.005 "dma_device_type": 1 00:20:37.005 }, 00:20:37.005 { 00:20:37.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:37.005 "dma_device_type": 2 00:20:37.005 } 00:20:37.005 ], 00:20:37.005 "driver_specific": {} 00:20:37.005 } 00:20:37.005 ] 00:20:37.005 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:37.005 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:37.005 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:37.005 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:37.264 BaseBdev3 00:20:37.264 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:20:37.264 09:47:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:20:37.264 09:47:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:37.264 09:47:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:37.264 09:47:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:37.264 09:47:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:37.264 09:47:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:37.264 09:47:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:37.525 [ 00:20:37.525 { 00:20:37.525 "name": "BaseBdev3", 00:20:37.525 "aliases": [ 00:20:37.525 "3194a966-428f-11ef-a0af-c98d8ee52a94" 00:20:37.525 ], 00:20:37.525 "product_name": "Malloc disk", 00:20:37.525 "block_size": 512, 00:20:37.525 "num_blocks": 65536, 00:20:37.525 "uuid": "3194a966-428f-11ef-a0af-c98d8ee52a94", 00:20:37.525 "assigned_rate_limits": { 00:20:37.525 "rw_ios_per_sec": 0, 00:20:37.525 "rw_mbytes_per_sec": 0, 00:20:37.525 "r_mbytes_per_sec": 0, 00:20:37.525 "w_mbytes_per_sec": 0 00:20:37.525 }, 00:20:37.525 "claimed": false, 00:20:37.525 "zoned": false, 00:20:37.525 "supported_io_types": { 00:20:37.525 "read": true, 00:20:37.525 "write": 
true, 00:20:37.525 "unmap": true, 00:20:37.525 "flush": true, 00:20:37.525 "reset": true, 00:20:37.525 "nvme_admin": false, 00:20:37.525 "nvme_io": false, 00:20:37.525 "nvme_io_md": false, 00:20:37.525 "write_zeroes": true, 00:20:37.525 "zcopy": true, 00:20:37.525 "get_zone_info": false, 00:20:37.525 "zone_management": false, 00:20:37.525 "zone_append": false, 00:20:37.525 "compare": false, 00:20:37.525 "compare_and_write": false, 00:20:37.525 "abort": true, 00:20:37.525 "seek_hole": false, 00:20:37.525 "seek_data": false, 00:20:37.525 "copy": true, 00:20:37.525 "nvme_iov_md": false 00:20:37.525 }, 00:20:37.525 "memory_domains": [ 00:20:37.525 { 00:20:37.525 "dma_device_id": "system", 00:20:37.525 "dma_device_type": 1 00:20:37.525 }, 00:20:37.525 { 00:20:37.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:37.525 "dma_device_type": 2 00:20:37.525 } 00:20:37.525 ], 00:20:37.525 "driver_specific": {} 00:20:37.525 } 00:20:37.525 ] 00:20:37.526 09:47:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:37.526 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:37.526 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:37.526 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:37.785 [2024-07-15 09:47:05.743850] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:37.785 [2024-07-15 09:47:05.743912] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:37.786 [2024-07-15 09:47:05.743919] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:37.786 [2024-07-15 09:47:05.744536] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:37.786 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:37.786 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:37.786 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:37.786 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:37.786 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:37.786 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:37.786 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:37.786 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:37.786 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:37.786 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:37.786 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.786 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:38.050 09:47:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:38.050 "name": "Existed_Raid", 00:20:38.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.050 "strip_size_kb": 64, 00:20:38.050 "state": "configuring", 00:20:38.050 "raid_level": "raid0", 00:20:38.050 "superblock": false, 00:20:38.050 "num_base_bdevs": 3, 00:20:38.050 "num_base_bdevs_discovered": 2, 00:20:38.050 "num_base_bdevs_operational": 3, 00:20:38.050 "base_bdevs_list": [ 00:20:38.051 { 00:20:38.051 "name": "BaseBdev1", 00:20:38.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.051 "is_configured": false, 00:20:38.051 "data_offset": 0, 00:20:38.051 "data_size": 0 00:20:38.051 }, 00:20:38.051 { 00:20:38.051 "name": "BaseBdev2", 00:20:38.051 "uuid": "312f52c6-428f-11ef-a0af-c98d8ee52a94", 00:20:38.051 "is_configured": true, 00:20:38.051 "data_offset": 0, 00:20:38.051 "data_size": 65536 00:20:38.051 }, 00:20:38.051 { 00:20:38.051 "name": "BaseBdev3", 00:20:38.051 "uuid": "3194a966-428f-11ef-a0af-c98d8ee52a94", 00:20:38.051 "is_configured": true, 00:20:38.051 "data_offset": 0, 00:20:38.051 "data_size": 65536 00:20:38.051 } 00:20:38.051 ] 00:20:38.051 }' 00:20:38.051 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:38.051 09:47:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.310 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:38.569 [2024-07-15 09:47:06.447950] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:38.569 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:38.569 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:38.569 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:38.569 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:38.569 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:38.569 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:38.569 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:38.569 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:38.569 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:38.569 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:38.569 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:38.569 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:38.827 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:38.827 "name": "Existed_Raid", 00:20:38.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.828 "strip_size_kb": 64, 00:20:38.828 "state": "configuring", 00:20:38.828 "raid_level": "raid0", 00:20:38.828 "superblock": false, 00:20:38.828 "num_base_bdevs": 3, 00:20:38.828 "num_base_bdevs_discovered": 1, 
00:20:38.828 "num_base_bdevs_operational": 3, 00:20:38.828 "base_bdevs_list": [ 00:20:38.828 { 00:20:38.828 "name": "BaseBdev1", 00:20:38.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.828 "is_configured": false, 00:20:38.828 "data_offset": 0, 00:20:38.828 "data_size": 0 00:20:38.828 }, 00:20:38.828 { 00:20:38.828 "name": null, 00:20:38.828 "uuid": "312f52c6-428f-11ef-a0af-c98d8ee52a94", 00:20:38.828 "is_configured": false, 00:20:38.828 "data_offset": 0, 00:20:38.828 "data_size": 65536 00:20:38.828 }, 00:20:38.828 { 00:20:38.828 "name": "BaseBdev3", 00:20:38.828 "uuid": "3194a966-428f-11ef-a0af-c98d8ee52a94", 00:20:38.828 "is_configured": true, 00:20:38.828 "data_offset": 0, 00:20:38.828 "data_size": 65536 00:20:38.828 } 00:20:38.828 ] 00:20:38.828 }' 00:20:38.828 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:38.828 09:47:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.085 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.085 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:39.343 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:20:39.343 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:39.343 [2024-07-15 09:47:07.388237] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:39.343 BaseBdev1 00:20:39.343 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:20:39.343 09:47:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:39.343 09:47:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:39.343 09:47:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:39.343 09:47:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:39.343 09:47:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:39.343 09:47:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:39.601 09:47:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:39.860 [ 00:20:39.860 { 00:20:39.860 "name": "BaseBdev1", 00:20:39.860 "aliases": [ 00:20:39.860 "32ec57e1-428f-11ef-a0af-c98d8ee52a94" 00:20:39.860 ], 00:20:39.860 "product_name": "Malloc disk", 00:20:39.860 "block_size": 512, 00:20:39.860 "num_blocks": 65536, 00:20:39.860 "uuid": "32ec57e1-428f-11ef-a0af-c98d8ee52a94", 00:20:39.860 "assigned_rate_limits": { 00:20:39.860 "rw_ios_per_sec": 0, 00:20:39.860 "rw_mbytes_per_sec": 0, 00:20:39.860 "r_mbytes_per_sec": 0, 00:20:39.860 "w_mbytes_per_sec": 0 00:20:39.860 }, 00:20:39.860 "claimed": true, 00:20:39.860 "claim_type": "exclusive_write", 00:20:39.860 "zoned": false, 00:20:39.860 "supported_io_types": { 00:20:39.860 "read": true, 00:20:39.860 "write": true, 00:20:39.860 "unmap": 
true, 00:20:39.860 "flush": true, 00:20:39.860 "reset": true, 00:20:39.860 "nvme_admin": false, 00:20:39.860 "nvme_io": false, 00:20:39.860 "nvme_io_md": false, 00:20:39.860 "write_zeroes": true, 00:20:39.860 "zcopy": true, 00:20:39.860 "get_zone_info": false, 00:20:39.860 "zone_management": false, 00:20:39.860 "zone_append": false, 00:20:39.860 "compare": false, 00:20:39.860 "compare_and_write": false, 00:20:39.860 "abort": true, 00:20:39.860 "seek_hole": false, 00:20:39.860 "seek_data": false, 00:20:39.860 "copy": true, 00:20:39.860 "nvme_iov_md": false 00:20:39.860 }, 00:20:39.860 "memory_domains": [ 00:20:39.860 { 00:20:39.860 "dma_device_id": "system", 00:20:39.860 "dma_device_type": 1 00:20:39.860 }, 00:20:39.860 { 00:20:39.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:39.860 "dma_device_type": 2 00:20:39.860 } 00:20:39.860 ], 00:20:39.860 "driver_specific": {} 00:20:39.860 } 00:20:39.860 ] 00:20:39.860 09:47:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:39.860 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:39.860 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:39.860 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:39.860 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:39.860 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:39.860 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:39.860 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:39.860 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:39.860 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:39.860 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:39.860 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.860 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:40.119 09:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:40.119 "name": "Existed_Raid", 00:20:40.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.119 "strip_size_kb": 64, 00:20:40.119 "state": "configuring", 00:20:40.119 "raid_level": "raid0", 00:20:40.119 "superblock": false, 00:20:40.119 "num_base_bdevs": 3, 00:20:40.119 "num_base_bdevs_discovered": 2, 00:20:40.119 "num_base_bdevs_operational": 3, 00:20:40.119 "base_bdevs_list": [ 00:20:40.119 { 00:20:40.119 "name": "BaseBdev1", 00:20:40.119 "uuid": "32ec57e1-428f-11ef-a0af-c98d8ee52a94", 00:20:40.119 "is_configured": true, 00:20:40.119 "data_offset": 0, 00:20:40.119 "data_size": 65536 00:20:40.119 }, 00:20:40.119 { 00:20:40.119 "name": null, 00:20:40.119 "uuid": "312f52c6-428f-11ef-a0af-c98d8ee52a94", 00:20:40.119 "is_configured": false, 00:20:40.119 "data_offset": 0, 00:20:40.119 "data_size": 65536 00:20:40.119 }, 00:20:40.119 { 00:20:40.119 "name": "BaseBdev3", 00:20:40.119 "uuid": "3194a966-428f-11ef-a0af-c98d8ee52a94", 
00:20:40.119 "is_configured": true, 00:20:40.119 "data_offset": 0, 00:20:40.119 "data_size": 65536 00:20:40.119 } 00:20:40.119 ] 00:20:40.119 }' 00:20:40.119 09:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:40.119 09:47:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.377 09:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.377 09:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:40.636 09:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:20:40.636 09:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:20:40.897 [2024-07-15 09:47:08.768315] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:40.897 09:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:40.897 09:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:40.897 09:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:40.897 09:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:40.897 09:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:40.897 09:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:40.897 09:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:40.897 09:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:40.897 09:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:40.897 09:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:40.897 09:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.897 09:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:40.897 09:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:40.897 "name": "Existed_Raid", 00:20:40.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.897 "strip_size_kb": 64, 00:20:40.897 "state": "configuring", 00:20:40.897 "raid_level": "raid0", 00:20:40.897 "superblock": false, 00:20:40.897 "num_base_bdevs": 3, 00:20:40.897 "num_base_bdevs_discovered": 1, 00:20:40.897 "num_base_bdevs_operational": 3, 00:20:40.897 "base_bdevs_list": [ 00:20:40.897 { 00:20:40.897 "name": "BaseBdev1", 00:20:40.897 "uuid": "32ec57e1-428f-11ef-a0af-c98d8ee52a94", 00:20:40.897 "is_configured": true, 00:20:40.897 "data_offset": 0, 00:20:40.897 "data_size": 65536 00:20:40.897 }, 00:20:40.897 { 00:20:40.897 "name": null, 00:20:40.897 "uuid": "312f52c6-428f-11ef-a0af-c98d8ee52a94", 00:20:40.897 "is_configured": false, 00:20:40.897 "data_offset": 0, 00:20:40.897 "data_size": 65536 00:20:40.897 }, 00:20:40.897 { 00:20:40.897 "name": null, 00:20:40.897 "uuid": 
"3194a966-428f-11ef-a0af-c98d8ee52a94", 00:20:40.897 "is_configured": false, 00:20:40.897 "data_offset": 0, 00:20:40.897 "data_size": 65536 00:20:40.897 } 00:20:40.897 ] 00:20:40.897 }' 00:20:40.897 09:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:40.897 09:47:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.499 09:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:41.499 09:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:41.499 09:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:20:41.499 09:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:41.758 [2024-07-15 09:47:09.724469] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:41.758 09:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:41.758 09:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:41.758 09:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:41.758 09:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:41.758 09:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:41.758 09:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:41.758 09:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:41.758 09:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:41.758 09:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:41.758 09:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:41.758 09:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:41.758 09:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.016 09:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:42.016 "name": "Existed_Raid", 00:20:42.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.016 "strip_size_kb": 64, 00:20:42.016 "state": "configuring", 00:20:42.016 "raid_level": "raid0", 00:20:42.016 "superblock": false, 00:20:42.016 "num_base_bdevs": 3, 00:20:42.016 "num_base_bdevs_discovered": 2, 00:20:42.016 "num_base_bdevs_operational": 3, 00:20:42.016 "base_bdevs_list": [ 00:20:42.016 { 00:20:42.016 "name": "BaseBdev1", 00:20:42.016 "uuid": "32ec57e1-428f-11ef-a0af-c98d8ee52a94", 00:20:42.016 "is_configured": true, 00:20:42.016 "data_offset": 0, 00:20:42.016 "data_size": 65536 00:20:42.016 }, 00:20:42.016 { 00:20:42.016 "name": null, 00:20:42.016 "uuid": "312f52c6-428f-11ef-a0af-c98d8ee52a94", 00:20:42.016 "is_configured": false, 00:20:42.016 "data_offset": 0, 00:20:42.016 "data_size": 65536 
00:20:42.016 }, 00:20:42.016 { 00:20:42.016 "name": "BaseBdev3", 00:20:42.016 "uuid": "3194a966-428f-11ef-a0af-c98d8ee52a94", 00:20:42.016 "is_configured": true, 00:20:42.016 "data_offset": 0, 00:20:42.016 "data_size": 65536 00:20:42.016 } 00:20:42.016 ] 00:20:42.016 }' 00:20:42.016 09:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:42.016 09:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.275 09:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.275 09:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:42.534 09:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:20:42.534 09:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:42.793 [2024-07-15 09:47:10.640618] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:42.793 09:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:42.793 09:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:42.793 09:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:42.793 09:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:42.793 09:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:42.793 09:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:42.793 09:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:42.793 09:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:42.793 09:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:42.793 09:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:42.793 09:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.793 09:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:42.793 09:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:42.793 "name": "Existed_Raid", 00:20:42.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.793 "strip_size_kb": 64, 00:20:42.793 "state": "configuring", 00:20:42.793 "raid_level": "raid0", 00:20:42.793 "superblock": false, 00:20:42.793 "num_base_bdevs": 3, 00:20:42.793 "num_base_bdevs_discovered": 1, 00:20:42.793 "num_base_bdevs_operational": 3, 00:20:42.793 "base_bdevs_list": [ 00:20:42.793 { 00:20:42.793 "name": null, 00:20:42.793 "uuid": "32ec57e1-428f-11ef-a0af-c98d8ee52a94", 00:20:42.793 "is_configured": false, 00:20:42.793 "data_offset": 0, 00:20:42.793 "data_size": 65536 00:20:42.793 }, 00:20:42.793 { 00:20:42.793 "name": null, 00:20:42.793 "uuid": "312f52c6-428f-11ef-a0af-c98d8ee52a94", 00:20:42.793 "is_configured": false, 00:20:42.793 "data_offset": 
0, 00:20:42.793 "data_size": 65536 00:20:42.793 }, 00:20:42.793 { 00:20:42.793 "name": "BaseBdev3", 00:20:42.793 "uuid": "3194a966-428f-11ef-a0af-c98d8ee52a94", 00:20:42.793 "is_configured": true, 00:20:42.793 "data_offset": 0, 00:20:42.793 "data_size": 65536 00:20:42.793 } 00:20:42.793 ] 00:20:42.793 }' 00:20:42.793 09:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:42.793 09:47:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.053 09:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:43.053 09:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:43.312 09:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:20:43.312 09:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:43.572 [2024-07-15 09:47:11.529163] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:43.572 09:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:43.572 09:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:43.572 09:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:43.572 09:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:43.572 09:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:43.572 09:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:43.572 09:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:43.572 09:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:43.572 09:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:43.572 09:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:43.572 09:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:43.572 09:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:43.846 09:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:43.846 "name": "Existed_Raid", 00:20:43.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.846 "strip_size_kb": 64, 00:20:43.846 "state": "configuring", 00:20:43.846 "raid_level": "raid0", 00:20:43.846 "superblock": false, 00:20:43.846 "num_base_bdevs": 3, 00:20:43.846 "num_base_bdevs_discovered": 2, 00:20:43.846 "num_base_bdevs_operational": 3, 00:20:43.846 "base_bdevs_list": [ 00:20:43.846 { 00:20:43.846 "name": null, 00:20:43.846 "uuid": "32ec57e1-428f-11ef-a0af-c98d8ee52a94", 00:20:43.846 "is_configured": false, 00:20:43.846 "data_offset": 0, 00:20:43.846 "data_size": 65536 00:20:43.846 }, 00:20:43.846 { 00:20:43.846 "name": "BaseBdev2", 00:20:43.846 "uuid": 
"312f52c6-428f-11ef-a0af-c98d8ee52a94", 00:20:43.846 "is_configured": true, 00:20:43.846 "data_offset": 0, 00:20:43.846 "data_size": 65536 00:20:43.846 }, 00:20:43.846 { 00:20:43.846 "name": "BaseBdev3", 00:20:43.846 "uuid": "3194a966-428f-11ef-a0af-c98d8ee52a94", 00:20:43.846 "is_configured": true, 00:20:43.846 "data_offset": 0, 00:20:43.846 "data_size": 65536 00:20:43.846 } 00:20:43.846 ] 00:20:43.846 }' 00:20:43.846 09:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:43.846 09:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:44.106 09:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.106 09:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:44.365 09:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:20:44.365 09:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.365 09:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:44.624 09:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 32ec57e1-428f-11ef-a0af-c98d8ee52a94 00:20:44.624 [2024-07-15 09:47:12.669387] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:44.624 [2024-07-15 09:47:12.669414] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2edd7634a00 00:20:44.624 [2024-07-15 09:47:12.669418] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:44.624 [2024-07-15 09:47:12.669439] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2edd7697e20 00:20:44.624 [2024-07-15 09:47:12.669509] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2edd7634a00 00:20:44.624 [2024-07-15 09:47:12.669513] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2edd7634a00 00:20:44.624 [2024-07-15 09:47:12.669540] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:44.624 NewBaseBdev 00:20:44.624 09:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:20:44.624 09:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:20:44.624 09:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:44.624 09:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:44.624 09:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:44.624 09:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:44.624 09:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:44.884 09:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
NewBaseBdev -t 2000 00:20:45.143 [ 00:20:45.143 { 00:20:45.143 "name": "NewBaseBdev", 00:20:45.143 "aliases": [ 00:20:45.143 "32ec57e1-428f-11ef-a0af-c98d8ee52a94" 00:20:45.143 ], 00:20:45.143 "product_name": "Malloc disk", 00:20:45.143 "block_size": 512, 00:20:45.143 "num_blocks": 65536, 00:20:45.143 "uuid": "32ec57e1-428f-11ef-a0af-c98d8ee52a94", 00:20:45.143 "assigned_rate_limits": { 00:20:45.143 "rw_ios_per_sec": 0, 00:20:45.143 "rw_mbytes_per_sec": 0, 00:20:45.143 "r_mbytes_per_sec": 0, 00:20:45.143 "w_mbytes_per_sec": 0 00:20:45.143 }, 00:20:45.143 "claimed": true, 00:20:45.143 "claim_type": "exclusive_write", 00:20:45.143 "zoned": false, 00:20:45.143 "supported_io_types": { 00:20:45.143 "read": true, 00:20:45.143 "write": true, 00:20:45.143 "unmap": true, 00:20:45.143 "flush": true, 00:20:45.143 "reset": true, 00:20:45.143 "nvme_admin": false, 00:20:45.143 "nvme_io": false, 00:20:45.143 "nvme_io_md": false, 00:20:45.143 "write_zeroes": true, 00:20:45.143 "zcopy": true, 00:20:45.143 "get_zone_info": false, 00:20:45.143 "zone_management": false, 00:20:45.143 "zone_append": false, 00:20:45.143 "compare": false, 00:20:45.143 "compare_and_write": false, 00:20:45.143 "abort": true, 00:20:45.143 "seek_hole": false, 00:20:45.143 "seek_data": false, 00:20:45.143 "copy": true, 00:20:45.143 "nvme_iov_md": false 00:20:45.143 }, 00:20:45.143 "memory_domains": [ 00:20:45.143 { 00:20:45.143 "dma_device_id": "system", 00:20:45.143 "dma_device_type": 1 00:20:45.143 }, 00:20:45.143 { 00:20:45.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:45.143 "dma_device_type": 2 00:20:45.143 } 00:20:45.143 ], 00:20:45.143 "driver_specific": {} 00:20:45.143 } 00:20:45.143 ] 00:20:45.143 09:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:45.143 09:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:20:45.143 09:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:45.143 09:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:45.143 09:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:45.143 09:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:45.143 09:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:45.143 09:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:45.143 09:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:45.143 09:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:45.143 09:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:45.143 09:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.143 09:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:45.400 09:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:45.400 "name": "Existed_Raid", 00:20:45.400 "uuid": "36123565-428f-11ef-a0af-c98d8ee52a94", 00:20:45.400 "strip_size_kb": 64, 00:20:45.400 "state": "online", 00:20:45.400 "raid_level": "raid0", 
00:20:45.400 "superblock": false, 00:20:45.400 "num_base_bdevs": 3, 00:20:45.400 "num_base_bdevs_discovered": 3, 00:20:45.400 "num_base_bdevs_operational": 3, 00:20:45.400 "base_bdevs_list": [ 00:20:45.400 { 00:20:45.400 "name": "NewBaseBdev", 00:20:45.400 "uuid": "32ec57e1-428f-11ef-a0af-c98d8ee52a94", 00:20:45.400 "is_configured": true, 00:20:45.400 "data_offset": 0, 00:20:45.400 "data_size": 65536 00:20:45.400 }, 00:20:45.400 { 00:20:45.400 "name": "BaseBdev2", 00:20:45.400 "uuid": "312f52c6-428f-11ef-a0af-c98d8ee52a94", 00:20:45.400 "is_configured": true, 00:20:45.400 "data_offset": 0, 00:20:45.400 "data_size": 65536 00:20:45.400 }, 00:20:45.400 { 00:20:45.400 "name": "BaseBdev3", 00:20:45.400 "uuid": "3194a966-428f-11ef-a0af-c98d8ee52a94", 00:20:45.400 "is_configured": true, 00:20:45.400 "data_offset": 0, 00:20:45.400 "data_size": 65536 00:20:45.400 } 00:20:45.400 ] 00:20:45.400 }' 00:20:45.400 09:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:45.400 09:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.658 09:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:20:45.658 09:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:45.658 09:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:45.658 09:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:45.658 09:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:45.658 09:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:45.658 09:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:45.658 09:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:45.658 [2024-07-15 09:47:13.753407] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:45.917 09:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:45.917 "name": "Existed_Raid", 00:20:45.917 "aliases": [ 00:20:45.917 "36123565-428f-11ef-a0af-c98d8ee52a94" 00:20:45.917 ], 00:20:45.917 "product_name": "Raid Volume", 00:20:45.917 "block_size": 512, 00:20:45.917 "num_blocks": 196608, 00:20:45.917 "uuid": "36123565-428f-11ef-a0af-c98d8ee52a94", 00:20:45.917 "assigned_rate_limits": { 00:20:45.917 "rw_ios_per_sec": 0, 00:20:45.917 "rw_mbytes_per_sec": 0, 00:20:45.917 "r_mbytes_per_sec": 0, 00:20:45.917 "w_mbytes_per_sec": 0 00:20:45.917 }, 00:20:45.917 "claimed": false, 00:20:45.917 "zoned": false, 00:20:45.917 "supported_io_types": { 00:20:45.917 "read": true, 00:20:45.917 "write": true, 00:20:45.917 "unmap": true, 00:20:45.917 "flush": true, 00:20:45.917 "reset": true, 00:20:45.917 "nvme_admin": false, 00:20:45.917 "nvme_io": false, 00:20:45.917 "nvme_io_md": false, 00:20:45.917 "write_zeroes": true, 00:20:45.917 "zcopy": false, 00:20:45.917 "get_zone_info": false, 00:20:45.917 "zone_management": false, 00:20:45.917 "zone_append": false, 00:20:45.917 "compare": false, 00:20:45.917 "compare_and_write": false, 00:20:45.917 "abort": false, 00:20:45.917 "seek_hole": false, 00:20:45.917 "seek_data": false, 00:20:45.917 "copy": false, 00:20:45.917 "nvme_iov_md": false 00:20:45.917 }, 00:20:45.917 
"memory_domains": [ 00:20:45.917 { 00:20:45.917 "dma_device_id": "system", 00:20:45.917 "dma_device_type": 1 00:20:45.917 }, 00:20:45.917 { 00:20:45.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:45.917 "dma_device_type": 2 00:20:45.917 }, 00:20:45.917 { 00:20:45.917 "dma_device_id": "system", 00:20:45.917 "dma_device_type": 1 00:20:45.917 }, 00:20:45.917 { 00:20:45.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:45.917 "dma_device_type": 2 00:20:45.917 }, 00:20:45.917 { 00:20:45.917 "dma_device_id": "system", 00:20:45.917 "dma_device_type": 1 00:20:45.917 }, 00:20:45.917 { 00:20:45.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:45.917 "dma_device_type": 2 00:20:45.917 } 00:20:45.917 ], 00:20:45.917 "driver_specific": { 00:20:45.917 "raid": { 00:20:45.917 "uuid": "36123565-428f-11ef-a0af-c98d8ee52a94", 00:20:45.917 "strip_size_kb": 64, 00:20:45.917 "state": "online", 00:20:45.917 "raid_level": "raid0", 00:20:45.917 "superblock": false, 00:20:45.917 "num_base_bdevs": 3, 00:20:45.917 "num_base_bdevs_discovered": 3, 00:20:45.918 "num_base_bdevs_operational": 3, 00:20:45.918 "base_bdevs_list": [ 00:20:45.918 { 00:20:45.918 "name": "NewBaseBdev", 00:20:45.918 "uuid": "32ec57e1-428f-11ef-a0af-c98d8ee52a94", 00:20:45.918 "is_configured": true, 00:20:45.918 "data_offset": 0, 00:20:45.918 "data_size": 65536 00:20:45.918 }, 00:20:45.918 { 00:20:45.918 "name": "BaseBdev2", 00:20:45.918 "uuid": "312f52c6-428f-11ef-a0af-c98d8ee52a94", 00:20:45.918 "is_configured": true, 00:20:45.918 "data_offset": 0, 00:20:45.918 "data_size": 65536 00:20:45.918 }, 00:20:45.918 { 00:20:45.918 "name": "BaseBdev3", 00:20:45.918 "uuid": "3194a966-428f-11ef-a0af-c98d8ee52a94", 00:20:45.918 "is_configured": true, 00:20:45.918 "data_offset": 0, 00:20:45.918 "data_size": 65536 00:20:45.918 } 00:20:45.918 ] 00:20:45.918 } 00:20:45.918 } 00:20:45.918 }' 00:20:45.918 09:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:45.918 09:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:20:45.918 BaseBdev2 00:20:45.918 BaseBdev3' 00:20:45.918 09:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:45.918 09:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:20:45.918 09:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:45.918 09:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:45.918 "name": "NewBaseBdev", 00:20:45.918 "aliases": [ 00:20:45.918 "32ec57e1-428f-11ef-a0af-c98d8ee52a94" 00:20:45.918 ], 00:20:45.918 "product_name": "Malloc disk", 00:20:45.918 "block_size": 512, 00:20:45.918 "num_blocks": 65536, 00:20:45.918 "uuid": "32ec57e1-428f-11ef-a0af-c98d8ee52a94", 00:20:45.918 "assigned_rate_limits": { 00:20:45.918 "rw_ios_per_sec": 0, 00:20:45.918 "rw_mbytes_per_sec": 0, 00:20:45.918 "r_mbytes_per_sec": 0, 00:20:45.918 "w_mbytes_per_sec": 0 00:20:45.918 }, 00:20:45.918 "claimed": true, 00:20:45.918 "claim_type": "exclusive_write", 00:20:45.918 "zoned": false, 00:20:45.918 "supported_io_types": { 00:20:45.918 "read": true, 00:20:45.918 "write": true, 00:20:45.918 "unmap": true, 00:20:45.918 "flush": true, 00:20:45.918 "reset": true, 00:20:45.918 "nvme_admin": false, 00:20:45.918 "nvme_io": false, 
00:20:45.918 "nvme_io_md": false, 00:20:45.918 "write_zeroes": true, 00:20:45.918 "zcopy": true, 00:20:45.918 "get_zone_info": false, 00:20:45.918 "zone_management": false, 00:20:45.918 "zone_append": false, 00:20:45.918 "compare": false, 00:20:45.918 "compare_and_write": false, 00:20:45.918 "abort": true, 00:20:45.918 "seek_hole": false, 00:20:45.918 "seek_data": false, 00:20:45.918 "copy": true, 00:20:45.918 "nvme_iov_md": false 00:20:45.918 }, 00:20:45.918 "memory_domains": [ 00:20:45.918 { 00:20:45.918 "dma_device_id": "system", 00:20:45.918 "dma_device_type": 1 00:20:45.918 }, 00:20:45.918 { 00:20:45.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:45.918 "dma_device_type": 2 00:20:45.918 } 00:20:45.918 ], 00:20:45.918 "driver_specific": {} 00:20:45.918 }' 00:20:45.918 09:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:45.918 09:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:45.918 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:45.918 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:45.918 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:46.177 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:46.177 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:46.177 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:46.177 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:46.177 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:46.177 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:46.177 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:46.177 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:46.177 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:46.177 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:46.177 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:46.177 "name": "BaseBdev2", 00:20:46.177 "aliases": [ 00:20:46.177 "312f52c6-428f-11ef-a0af-c98d8ee52a94" 00:20:46.177 ], 00:20:46.177 "product_name": "Malloc disk", 00:20:46.177 "block_size": 512, 00:20:46.177 "num_blocks": 65536, 00:20:46.177 "uuid": "312f52c6-428f-11ef-a0af-c98d8ee52a94", 00:20:46.177 "assigned_rate_limits": { 00:20:46.177 "rw_ios_per_sec": 0, 00:20:46.177 "rw_mbytes_per_sec": 0, 00:20:46.177 "r_mbytes_per_sec": 0, 00:20:46.177 "w_mbytes_per_sec": 0 00:20:46.177 }, 00:20:46.177 "claimed": true, 00:20:46.177 "claim_type": "exclusive_write", 00:20:46.177 "zoned": false, 00:20:46.177 "supported_io_types": { 00:20:46.177 "read": true, 00:20:46.177 "write": true, 00:20:46.177 "unmap": true, 00:20:46.177 "flush": true, 00:20:46.177 "reset": true, 00:20:46.177 "nvme_admin": false, 00:20:46.177 "nvme_io": false, 00:20:46.177 "nvme_io_md": false, 00:20:46.177 "write_zeroes": true, 00:20:46.177 "zcopy": true, 00:20:46.177 "get_zone_info": false, 00:20:46.177 "zone_management": false, 00:20:46.177 "zone_append": 
false, 00:20:46.177 "compare": false, 00:20:46.177 "compare_and_write": false, 00:20:46.177 "abort": true, 00:20:46.177 "seek_hole": false, 00:20:46.177 "seek_data": false, 00:20:46.177 "copy": true, 00:20:46.177 "nvme_iov_md": false 00:20:46.177 }, 00:20:46.177 "memory_domains": [ 00:20:46.177 { 00:20:46.177 "dma_device_id": "system", 00:20:46.177 "dma_device_type": 1 00:20:46.177 }, 00:20:46.177 { 00:20:46.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:46.177 "dma_device_type": 2 00:20:46.177 } 00:20:46.177 ], 00:20:46.177 "driver_specific": {} 00:20:46.177 }' 00:20:46.177 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:46.177 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:46.437 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:46.437 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:46.437 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:46.437 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:46.437 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:46.437 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:46.437 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:46.437 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:46.437 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:46.437 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:46.437 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:46.437 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:46.437 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:46.696 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:46.696 "name": "BaseBdev3", 00:20:46.696 "aliases": [ 00:20:46.696 "3194a966-428f-11ef-a0af-c98d8ee52a94" 00:20:46.696 ], 00:20:46.696 "product_name": "Malloc disk", 00:20:46.696 "block_size": 512, 00:20:46.696 "num_blocks": 65536, 00:20:46.696 "uuid": "3194a966-428f-11ef-a0af-c98d8ee52a94", 00:20:46.696 "assigned_rate_limits": { 00:20:46.696 "rw_ios_per_sec": 0, 00:20:46.696 "rw_mbytes_per_sec": 0, 00:20:46.696 "r_mbytes_per_sec": 0, 00:20:46.696 "w_mbytes_per_sec": 0 00:20:46.696 }, 00:20:46.696 "claimed": true, 00:20:46.696 "claim_type": "exclusive_write", 00:20:46.696 "zoned": false, 00:20:46.696 "supported_io_types": { 00:20:46.696 "read": true, 00:20:46.696 "write": true, 00:20:46.696 "unmap": true, 00:20:46.696 "flush": true, 00:20:46.696 "reset": true, 00:20:46.696 "nvme_admin": false, 00:20:46.696 "nvme_io": false, 00:20:46.696 "nvme_io_md": false, 00:20:46.696 "write_zeroes": true, 00:20:46.696 "zcopy": true, 00:20:46.696 "get_zone_info": false, 00:20:46.696 "zone_management": false, 00:20:46.696 "zone_append": false, 00:20:46.696 "compare": false, 00:20:46.696 "compare_and_write": false, 00:20:46.696 "abort": true, 00:20:46.696 "seek_hole": false, 00:20:46.696 "seek_data": false, 00:20:46.696 "copy": true, 
00:20:46.696 "nvme_iov_md": false 00:20:46.696 }, 00:20:46.696 "memory_domains": [ 00:20:46.696 { 00:20:46.696 "dma_device_id": "system", 00:20:46.696 "dma_device_type": 1 00:20:46.696 }, 00:20:46.696 { 00:20:46.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:46.696 "dma_device_type": 2 00:20:46.696 } 00:20:46.696 ], 00:20:46.696 "driver_specific": {} 00:20:46.696 }' 00:20:46.696 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:46.696 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:46.696 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:46.696 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:46.696 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:46.696 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:46.696 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:46.696 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:46.696 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:46.696 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:46.696 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:46.696 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:46.696 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:46.960 [2024-07-15 09:47:14.817522] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:46.960 [2024-07-15 09:47:14.817551] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:46.960 [2024-07-15 09:47:14.817564] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:46.960 [2024-07-15 09:47:14.817574] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:46.960 [2024-07-15 09:47:14.817578] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2edd7634a00 name Existed_Raid, state offline 00:20:46.960 09:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 51928 00:20:46.960 09:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 51928 ']' 00:20:46.960 09:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 51928 00:20:46.960 09:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:20:46.960 09:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:20:46.960 09:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 51928 00:20:46.960 09:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:20:46.960 09:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:20:46.960 09:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:20:46.960 killing process with pid 51928 00:20:46.960 09:47:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 51928' 00:20:46.960 09:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 51928 00:20:46.960 [2024-07-15 09:47:14.853259] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:46.960 09:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 51928 00:20:46.960 [2024-07-15 09:47:14.879806] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:47.219 09:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:20:47.219 00:20:47.219 real 0m20.475s 00:20:47.219 user 0m36.554s 00:20:47.219 sys 0m3.638s 00:20:47.219 09:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:47.219 09:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.219 ************************************ 00:20:47.219 END TEST raid_state_function_test 00:20:47.219 ************************************ 00:20:47.219 09:47:15 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:47.219 09:47:15 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:20:47.219 09:47:15 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:20:47.219 09:47:15 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:47.219 09:47:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:47.219 ************************************ 00:20:47.219 START TEST raid_state_function_test_sb 00:20:47.219 ************************************ 00:20:47.219 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 3 true 00:20:47.219 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:20:47.219 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:20:47.219 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:20:47.219 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:20:47.219 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:20:47.219 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:47.219 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:20:47.219 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:47.219 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:47.219 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:20:47.219 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:47.219 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:47.219 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:20:47.219 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:47.219 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:47.219 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:47.220 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:20:47.220 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:20:47.220 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:20:47.220 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:20:47.220 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:20:47.220 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:20:47.220 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:20:47.220 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:20:47.220 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:20:47.220 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:20:47.220 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=52641 00:20:47.220 Process raid pid: 52641 00:20:47.220 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 52641' 00:20:47.220 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:47.220 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 52641 /var/tmp/spdk-raid.sock 00:20:47.220 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 52641 ']' 00:20:47.220 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:47.220 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:47.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:47.220 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:47.220 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:47.220 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.220 [2024-07-15 09:47:15.208277] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:20:47.220 [2024-07-15 09:47:15.208599] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:20:48.165 EAL: TSC is not safe to use in SMP mode 00:20:48.165 EAL: TSC is not invariant 00:20:48.165 [2024-07-15 09:47:15.941788] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.165 [2024-07-15 09:47:16.057182] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
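Both state-function tests in this trace drive SPDK through the same JSON-RPC flow: create malloc base bdevs, assemble them into a raid0 volume, inspect the volume with bdev_raid_get_bdevs, and tear everything down. A minimal standalone sketch of that flow, assuming a bdev_svc app is already listening on /var/tmp/spdk-raid.sock (the rpc shell variable below is illustrative shorthand, not part of the test scripts):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Create three 32 MB malloc bdevs (65536 blocks x 512 bytes) as base devices,
  # matching the "bdev_malloc_create 32 512 -b ..." calls in the trace above.
  for b in BaseBdev1 BaseBdev2 BaseBdev3; do
      $rpc bdev_malloc_create 32 512 -b "$b"
  done
  $rpc bdev_wait_for_examine

  # Assemble a raid0 volume with a 64 KB strip size; -s additionally writes a
  # superblock, which is the only difference the *_sb variant of the test adds.
  $rpc bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

  # Inspect the volume state the same way verify_raid_bdev_state does.
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'

  # Tear down: delete the raid volume first, then its base bdevs.
  $rpc bdev_raid_delete Existed_Raid
  for b in BaseBdev1 BaseBdev2 BaseBdev3; do
      $rpc bdev_malloc_delete "$b"
  done

On top of this flow, the tests toggle membership with bdev_raid_remove_base_bdev and bdev_raid_add_base_bdev and assert the resulting "configuring"/"online"/"offline" states, which is what the repeated raid_bdev_info JSON dumps above are checking.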
00:20:48.165 [2024-07-15 09:47:16.059599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.165 [2024-07-15 09:47:16.060275] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:48.165 [2024-07-15 09:47:16.060286] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:48.165 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:48.165 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:20:48.166 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:48.424 [2024-07-15 09:47:16.363082] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:48.424 [2024-07-15 09:47:16.363134] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:48.424 [2024-07-15 09:47:16.363139] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:48.424 [2024-07-15 09:47:16.363146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:48.424 [2024-07-15 09:47:16.363149] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:48.424 [2024-07-15 09:47:16.363155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:48.424 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:48.424 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:48.424 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:48.424 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:48.425 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:48.425 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:48.425 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:48.425 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:48.425 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:48.425 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:48.425 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:48.425 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:48.683 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:48.683 "name": "Existed_Raid", 00:20:48.683 "uuid": "3845d0a4-428f-11ef-a0af-c98d8ee52a94", 00:20:48.683 "strip_size_kb": 64, 00:20:48.683 "state": "configuring", 00:20:48.683 "raid_level": "raid0", 00:20:48.683 "superblock": true, 00:20:48.683 "num_base_bdevs": 3, 00:20:48.683 "num_base_bdevs_discovered": 0, 00:20:48.683 
"num_base_bdevs_operational": 3, 00:20:48.683 "base_bdevs_list": [ 00:20:48.683 { 00:20:48.683 "name": "BaseBdev1", 00:20:48.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.683 "is_configured": false, 00:20:48.683 "data_offset": 0, 00:20:48.683 "data_size": 0 00:20:48.683 }, 00:20:48.683 { 00:20:48.683 "name": "BaseBdev2", 00:20:48.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.683 "is_configured": false, 00:20:48.683 "data_offset": 0, 00:20:48.683 "data_size": 0 00:20:48.683 }, 00:20:48.683 { 00:20:48.683 "name": "BaseBdev3", 00:20:48.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.683 "is_configured": false, 00:20:48.683 "data_offset": 0, 00:20:48.683 "data_size": 0 00:20:48.683 } 00:20:48.683 ] 00:20:48.683 }' 00:20:48.683 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:48.683 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.941 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:49.199 [2024-07-15 09:47:17.059135] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:49.199 [2024-07-15 09:47:17.059161] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3d7503c34500 name Existed_Raid, state configuring 00:20:49.199 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:49.199 [2024-07-15 09:47:17.263167] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:49.199 [2024-07-15 09:47:17.263218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:49.199 [2024-07-15 09:47:17.263221] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:49.199 [2024-07-15 09:47:17.263228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:49.199 [2024-07-15 09:47:17.263230] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:49.199 [2024-07-15 09:47:17.263237] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:49.199 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:49.458 [2024-07-15 09:47:17.492324] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:49.458 BaseBdev1 00:20:49.458 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:20:49.458 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:49.458 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:49.458 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:49.458 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:49.458 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:49.458 09:47:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:49.715 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:49.992 [ 00:20:49.992 { 00:20:49.992 "name": "BaseBdev1", 00:20:49.992 "aliases": [ 00:20:49.992 "38f1f421-428f-11ef-a0af-c98d8ee52a94" 00:20:49.992 ], 00:20:49.992 "product_name": "Malloc disk", 00:20:49.992 "block_size": 512, 00:20:49.992 "num_blocks": 65536, 00:20:49.992 "uuid": "38f1f421-428f-11ef-a0af-c98d8ee52a94", 00:20:49.992 "assigned_rate_limits": { 00:20:49.992 "rw_ios_per_sec": 0, 00:20:49.992 "rw_mbytes_per_sec": 0, 00:20:49.992 "r_mbytes_per_sec": 0, 00:20:49.992 "w_mbytes_per_sec": 0 00:20:49.992 }, 00:20:49.992 "claimed": true, 00:20:49.992 "claim_type": "exclusive_write", 00:20:49.992 "zoned": false, 00:20:49.992 "supported_io_types": { 00:20:49.992 "read": true, 00:20:49.992 "write": true, 00:20:49.992 "unmap": true, 00:20:49.992 "flush": true, 00:20:49.992 "reset": true, 00:20:49.992 "nvme_admin": false, 00:20:49.992 "nvme_io": false, 00:20:49.992 "nvme_io_md": false, 00:20:49.992 "write_zeroes": true, 00:20:49.992 "zcopy": true, 00:20:49.992 "get_zone_info": false, 00:20:49.992 "zone_management": false, 00:20:49.992 "zone_append": false, 00:20:49.992 "compare": false, 00:20:49.992 "compare_and_write": false, 00:20:49.992 "abort": true, 00:20:49.992 "seek_hole": false, 00:20:49.992 "seek_data": false, 00:20:49.992 "copy": true, 00:20:49.992 "nvme_iov_md": false 00:20:49.992 }, 00:20:49.992 "memory_domains": [ 00:20:49.992 { 00:20:49.992 "dma_device_id": "system", 00:20:49.992 "dma_device_type": 1 00:20:49.992 }, 00:20:49.992 { 00:20:49.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:49.992 "dma_device_type": 2 00:20:49.992 } 00:20:49.992 ], 00:20:49.993 "driver_specific": {} 00:20:49.993 } 00:20:49.993 ] 00:20:49.993 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:49.993 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:49.993 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:49.993 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:49.993 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:49.993 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:49.993 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:49.993 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:49.993 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:49.993 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:49.993 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:49.993 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.993 09:47:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:50.250 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:50.250 "name": "Existed_Raid", 00:20:50.250 "uuid": "38cf285d-428f-11ef-a0af-c98d8ee52a94", 00:20:50.250 "strip_size_kb": 64, 00:20:50.250 "state": "configuring", 00:20:50.250 "raid_level": "raid0", 00:20:50.250 "superblock": true, 00:20:50.250 "num_base_bdevs": 3, 00:20:50.250 "num_base_bdevs_discovered": 1, 00:20:50.250 "num_base_bdevs_operational": 3, 00:20:50.250 "base_bdevs_list": [ 00:20:50.250 { 00:20:50.250 "name": "BaseBdev1", 00:20:50.250 "uuid": "38f1f421-428f-11ef-a0af-c98d8ee52a94", 00:20:50.250 "is_configured": true, 00:20:50.250 "data_offset": 2048, 00:20:50.250 "data_size": 63488 00:20:50.250 }, 00:20:50.250 { 00:20:50.250 "name": "BaseBdev2", 00:20:50.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.250 "is_configured": false, 00:20:50.250 "data_offset": 0, 00:20:50.250 "data_size": 0 00:20:50.250 }, 00:20:50.250 { 00:20:50.250 "name": "BaseBdev3", 00:20:50.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.250 "is_configured": false, 00:20:50.250 "data_offset": 0, 00:20:50.250 "data_size": 0 00:20:50.250 } 00:20:50.250 ] 00:20:50.250 }' 00:20:50.250 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:50.250 09:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.530 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:50.530 [2024-07-15 09:47:18.583336] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:50.530 [2024-07-15 09:47:18.583382] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3d7503c34500 name Existed_Raid, state configuring 00:20:50.530 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:50.793 [2024-07-15 09:47:18.791365] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:50.793 [2024-07-15 09:47:18.792278] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:50.793 [2024-07-15 09:47:18.792327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:50.793 [2024-07-15 09:47:18.792332] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:50.793 [2024-07-15 09:47:18.792338] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:50.793 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:20:50.793 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:50.793 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:50.793 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:50.793 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:50.793 09:47:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:50.793 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:50.793 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:50.793 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:50.793 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:50.793 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:50.793 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:50.793 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.793 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:51.051 09:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:51.051 "name": "Existed_Raid", 00:20:51.051 "uuid": "39b85774-428f-11ef-a0af-c98d8ee52a94", 00:20:51.051 "strip_size_kb": 64, 00:20:51.051 "state": "configuring", 00:20:51.051 "raid_level": "raid0", 00:20:51.051 "superblock": true, 00:20:51.051 "num_base_bdevs": 3, 00:20:51.051 "num_base_bdevs_discovered": 1, 00:20:51.051 "num_base_bdevs_operational": 3, 00:20:51.051 "base_bdevs_list": [ 00:20:51.051 { 00:20:51.051 "name": "BaseBdev1", 00:20:51.051 "uuid": "38f1f421-428f-11ef-a0af-c98d8ee52a94", 00:20:51.051 "is_configured": true, 00:20:51.051 "data_offset": 2048, 00:20:51.051 "data_size": 63488 00:20:51.051 }, 00:20:51.051 { 00:20:51.051 "name": "BaseBdev2", 00:20:51.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.051 "is_configured": false, 00:20:51.051 "data_offset": 0, 00:20:51.051 "data_size": 0 00:20:51.051 }, 00:20:51.051 { 00:20:51.051 "name": "BaseBdev3", 00:20:51.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.051 "is_configured": false, 00:20:51.051 "data_offset": 0, 00:20:51.051 "data_size": 0 00:20:51.051 } 00:20:51.051 ] 00:20:51.051 }' 00:20:51.051 09:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:51.051 09:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.309 09:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:51.568 [2024-07-15 09:47:19.467575] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:51.568 BaseBdev2 00:20:51.568 09:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:20:51.568 09:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:20:51.568 09:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:51.568 09:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:51.568 09:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:51.568 09:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:51.568 
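BaseBdev2 has just been created and claimed by the raid; the entries that follow are the waitforbdev helper confirming it is visible. Each base bdev goes through the same create-and-wait step. A hedged sketch of that step, using only the RPCs and flags that appear in this trace (the 32 MiB / 512 B malloc geometry matches the num_blocks 65536 dumps above):

    # Sketch of the per-bdev create-and-wait step repeated in this test.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    $RPC bdev_malloc_create 32 512 -b BaseBdev2   # 32 MiB, 512 B blocks -> 65536 blocks
    $RPC bdev_wait_for_examine                    # let examine/claim callbacks settle
    $RPC bdev_get_bdevs -b BaseBdev2 -t 2000 >/dev/null   # error out if absent after 2000 ms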
09:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:51.829 09:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:51.829 [ 00:20:51.829 { 00:20:51.829 "name": "BaseBdev2", 00:20:51.829 "aliases": [ 00:20:51.829 "3a1f810f-428f-11ef-a0af-c98d8ee52a94" 00:20:51.829 ], 00:20:51.829 "product_name": "Malloc disk", 00:20:51.829 "block_size": 512, 00:20:51.829 "num_blocks": 65536, 00:20:51.829 "uuid": "3a1f810f-428f-11ef-a0af-c98d8ee52a94", 00:20:51.829 "assigned_rate_limits": { 00:20:51.829 "rw_ios_per_sec": 0, 00:20:51.829 "rw_mbytes_per_sec": 0, 00:20:51.829 "r_mbytes_per_sec": 0, 00:20:51.829 "w_mbytes_per_sec": 0 00:20:51.829 }, 00:20:51.829 "claimed": true, 00:20:51.829 "claim_type": "exclusive_write", 00:20:51.829 "zoned": false, 00:20:51.829 "supported_io_types": { 00:20:51.829 "read": true, 00:20:51.829 "write": true, 00:20:51.829 "unmap": true, 00:20:51.829 "flush": true, 00:20:51.829 "reset": true, 00:20:51.829 "nvme_admin": false, 00:20:51.829 "nvme_io": false, 00:20:51.829 "nvme_io_md": false, 00:20:51.829 "write_zeroes": true, 00:20:51.829 "zcopy": true, 00:20:51.829 "get_zone_info": false, 00:20:51.829 "zone_management": false, 00:20:51.829 "zone_append": false, 00:20:51.829 "compare": false, 00:20:51.829 "compare_and_write": false, 00:20:51.829 "abort": true, 00:20:51.829 "seek_hole": false, 00:20:51.829 "seek_data": false, 00:20:51.829 "copy": true, 00:20:51.829 "nvme_iov_md": false 00:20:51.829 }, 00:20:51.829 "memory_domains": [ 00:20:51.829 { 00:20:51.829 "dma_device_id": "system", 00:20:51.829 "dma_device_type": 1 00:20:51.829 }, 00:20:51.829 { 00:20:51.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:51.829 "dma_device_type": 2 00:20:51.829 } 00:20:51.829 ], 00:20:51.829 "driver_specific": {} 00:20:51.829 } 00:20:51.829 ] 00:20:51.829 09:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:51.829 09:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:51.829 09:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:51.829 09:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:51.829 09:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:51.829 09:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:51.829 09:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:51.829 09:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:51.829 09:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:51.829 09:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:51.829 09:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:51.829 09:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:51.829 09:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
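The verify_raid_bdev_state pass that follows pulls the raid descriptor with bdev_raid_get_bdevs and compares it field by field against the expected values. A compact approximation of that check under the same jq filter shown in the trace; the explicit field comparisons below are assumed stand-ins for the helper's internal asserts:

    # Sketch of the state check after two of the three base bdevs exist.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')

    # With one base bdev still missing, the raid must stay "configuring"
    # and report 2 of 3 base bdevs discovered (values from the dump below).
    [ "$(jq -r .state <<<"$info")" = configuring ] || exit 1
    [ "$(jq -r .raid_level <<<"$info")" = raid0 ] || exit 1
    [ "$(jq -r .strip_size_kb <<<"$info")" -eq 64 ] || exit 1
    [ "$(jq -r .num_base_bdevs_discovered <<<"$info")" -eq 2 ] || exit 1

Only after BaseBdev3 is added does the same check expect state "online" with all three base bdevs discovered, as the later dumps confirm.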
00:20:51.829 09:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.829 09:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:52.089 09:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:52.089 "name": "Existed_Raid", 00:20:52.089 "uuid": "39b85774-428f-11ef-a0af-c98d8ee52a94", 00:20:52.089 "strip_size_kb": 64, 00:20:52.089 "state": "configuring", 00:20:52.089 "raid_level": "raid0", 00:20:52.089 "superblock": true, 00:20:52.089 "num_base_bdevs": 3, 00:20:52.089 "num_base_bdevs_discovered": 2, 00:20:52.089 "num_base_bdevs_operational": 3, 00:20:52.089 "base_bdevs_list": [ 00:20:52.089 { 00:20:52.089 "name": "BaseBdev1", 00:20:52.089 "uuid": "38f1f421-428f-11ef-a0af-c98d8ee52a94", 00:20:52.089 "is_configured": true, 00:20:52.089 "data_offset": 2048, 00:20:52.089 "data_size": 63488 00:20:52.089 }, 00:20:52.089 { 00:20:52.089 "name": "BaseBdev2", 00:20:52.089 "uuid": "3a1f810f-428f-11ef-a0af-c98d8ee52a94", 00:20:52.089 "is_configured": true, 00:20:52.089 "data_offset": 2048, 00:20:52.089 "data_size": 63488 00:20:52.089 }, 00:20:52.089 { 00:20:52.089 "name": "BaseBdev3", 00:20:52.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.089 "is_configured": false, 00:20:52.089 "data_offset": 0, 00:20:52.089 "data_size": 0 00:20:52.089 } 00:20:52.089 ] 00:20:52.089 }' 00:20:52.089 09:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:52.089 09:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:52.348 09:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:52.608 [2024-07-15 09:47:20.495690] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:52.608 [2024-07-15 09:47:20.495760] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3d7503c34a00 00:20:52.608 [2024-07-15 09:47:20.495765] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:52.608 [2024-07-15 09:47:20.495782] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3d7503c97e20 00:20:52.608 [2024-07-15 09:47:20.495823] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3d7503c34a00 00:20:52.608 [2024-07-15 09:47:20.495826] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x3d7503c34a00 00:20:52.608 [2024-07-15 09:47:20.495843] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:52.608 BaseBdev3 00:20:52.608 09:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:20:52.608 09:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:20:52.608 09:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:52.608 09:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:52.608 09:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:52.608 09:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # 
bdev_timeout=2000 00:20:52.608 09:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:52.608 09:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:52.868 [ 00:20:52.868 { 00:20:52.868 "name": "BaseBdev3", 00:20:52.868 "aliases": [ 00:20:52.868 "3abc6276-428f-11ef-a0af-c98d8ee52a94" 00:20:52.868 ], 00:20:52.868 "product_name": "Malloc disk", 00:20:52.868 "block_size": 512, 00:20:52.868 "num_blocks": 65536, 00:20:52.868 "uuid": "3abc6276-428f-11ef-a0af-c98d8ee52a94", 00:20:52.868 "assigned_rate_limits": { 00:20:52.868 "rw_ios_per_sec": 0, 00:20:52.868 "rw_mbytes_per_sec": 0, 00:20:52.868 "r_mbytes_per_sec": 0, 00:20:52.868 "w_mbytes_per_sec": 0 00:20:52.868 }, 00:20:52.868 "claimed": true, 00:20:52.868 "claim_type": "exclusive_write", 00:20:52.868 "zoned": false, 00:20:52.868 "supported_io_types": { 00:20:52.868 "read": true, 00:20:52.868 "write": true, 00:20:52.868 "unmap": true, 00:20:52.868 "flush": true, 00:20:52.868 "reset": true, 00:20:52.868 "nvme_admin": false, 00:20:52.868 "nvme_io": false, 00:20:52.868 "nvme_io_md": false, 00:20:52.868 "write_zeroes": true, 00:20:52.868 "zcopy": true, 00:20:52.868 "get_zone_info": false, 00:20:52.868 "zone_management": false, 00:20:52.868 "zone_append": false, 00:20:52.868 "compare": false, 00:20:52.868 "compare_and_write": false, 00:20:52.868 "abort": true, 00:20:52.868 "seek_hole": false, 00:20:52.868 "seek_data": false, 00:20:52.868 "copy": true, 00:20:52.868 "nvme_iov_md": false 00:20:52.868 }, 00:20:52.868 "memory_domains": [ 00:20:52.868 { 00:20:52.868 "dma_device_id": "system", 00:20:52.868 "dma_device_type": 1 00:20:52.868 }, 00:20:52.868 { 00:20:52.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:52.868 "dma_device_type": 2 00:20:52.868 } 00:20:52.868 ], 00:20:52.868 "driver_specific": {} 00:20:52.868 } 00:20:52.868 ] 00:20:52.868 09:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:52.868 09:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:52.868 09:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:52.868 09:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:20:52.868 09:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:52.868 09:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:52.868 09:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:52.868 09:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:52.868 09:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:52.868 09:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:52.868 09:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:52.868 09:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:52.868 09:47:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:20:52.868 09:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.868 09:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:53.128 09:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:53.128 "name": "Existed_Raid", 00:20:53.128 "uuid": "39b85774-428f-11ef-a0af-c98d8ee52a94", 00:20:53.128 "strip_size_kb": 64, 00:20:53.128 "state": "online", 00:20:53.128 "raid_level": "raid0", 00:20:53.128 "superblock": true, 00:20:53.128 "num_base_bdevs": 3, 00:20:53.128 "num_base_bdevs_discovered": 3, 00:20:53.128 "num_base_bdevs_operational": 3, 00:20:53.128 "base_bdevs_list": [ 00:20:53.128 { 00:20:53.128 "name": "BaseBdev1", 00:20:53.128 "uuid": "38f1f421-428f-11ef-a0af-c98d8ee52a94", 00:20:53.128 "is_configured": true, 00:20:53.128 "data_offset": 2048, 00:20:53.128 "data_size": 63488 00:20:53.128 }, 00:20:53.128 { 00:20:53.128 "name": "BaseBdev2", 00:20:53.128 "uuid": "3a1f810f-428f-11ef-a0af-c98d8ee52a94", 00:20:53.128 "is_configured": true, 00:20:53.128 "data_offset": 2048, 00:20:53.128 "data_size": 63488 00:20:53.128 }, 00:20:53.128 { 00:20:53.128 "name": "BaseBdev3", 00:20:53.128 "uuid": "3abc6276-428f-11ef-a0af-c98d8ee52a94", 00:20:53.128 "is_configured": true, 00:20:53.128 "data_offset": 2048, 00:20:53.128 "data_size": 63488 00:20:53.128 } 00:20:53.128 ] 00:20:53.128 }' 00:20:53.128 09:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:53.128 09:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:53.387 09:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:20:53.387 09:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:53.387 09:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:53.387 09:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:53.387 09:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:53.387 09:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:20:53.387 09:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:53.387 09:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:53.645 [2024-07-15 09:47:21.591670] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:53.645 09:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:53.645 "name": "Existed_Raid", 00:20:53.645 "aliases": [ 00:20:53.645 "39b85774-428f-11ef-a0af-c98d8ee52a94" 00:20:53.645 ], 00:20:53.645 "product_name": "Raid Volume", 00:20:53.645 "block_size": 512, 00:20:53.645 "num_blocks": 190464, 00:20:53.645 "uuid": "39b85774-428f-11ef-a0af-c98d8ee52a94", 00:20:53.645 "assigned_rate_limits": { 00:20:53.645 "rw_ios_per_sec": 0, 00:20:53.645 "rw_mbytes_per_sec": 0, 00:20:53.645 "r_mbytes_per_sec": 0, 00:20:53.645 "w_mbytes_per_sec": 0 00:20:53.645 }, 00:20:53.645 "claimed": false, 00:20:53.645 "zoned": false, 
00:20:53.645 "supported_io_types": { 00:20:53.645 "read": true, 00:20:53.645 "write": true, 00:20:53.645 "unmap": true, 00:20:53.645 "flush": true, 00:20:53.645 "reset": true, 00:20:53.645 "nvme_admin": false, 00:20:53.645 "nvme_io": false, 00:20:53.645 "nvme_io_md": false, 00:20:53.645 "write_zeroes": true, 00:20:53.645 "zcopy": false, 00:20:53.645 "get_zone_info": false, 00:20:53.645 "zone_management": false, 00:20:53.645 "zone_append": false, 00:20:53.645 "compare": false, 00:20:53.645 "compare_and_write": false, 00:20:53.645 "abort": false, 00:20:53.645 "seek_hole": false, 00:20:53.645 "seek_data": false, 00:20:53.645 "copy": false, 00:20:53.645 "nvme_iov_md": false 00:20:53.645 }, 00:20:53.645 "memory_domains": [ 00:20:53.645 { 00:20:53.645 "dma_device_id": "system", 00:20:53.645 "dma_device_type": 1 00:20:53.645 }, 00:20:53.645 { 00:20:53.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:53.645 "dma_device_type": 2 00:20:53.645 }, 00:20:53.645 { 00:20:53.645 "dma_device_id": "system", 00:20:53.645 "dma_device_type": 1 00:20:53.645 }, 00:20:53.645 { 00:20:53.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:53.645 "dma_device_type": 2 00:20:53.645 }, 00:20:53.645 { 00:20:53.645 "dma_device_id": "system", 00:20:53.645 "dma_device_type": 1 00:20:53.645 }, 00:20:53.645 { 00:20:53.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:53.645 "dma_device_type": 2 00:20:53.645 } 00:20:53.645 ], 00:20:53.645 "driver_specific": { 00:20:53.645 "raid": { 00:20:53.645 "uuid": "39b85774-428f-11ef-a0af-c98d8ee52a94", 00:20:53.645 "strip_size_kb": 64, 00:20:53.645 "state": "online", 00:20:53.645 "raid_level": "raid0", 00:20:53.645 "superblock": true, 00:20:53.645 "num_base_bdevs": 3, 00:20:53.645 "num_base_bdevs_discovered": 3, 00:20:53.645 "num_base_bdevs_operational": 3, 00:20:53.645 "base_bdevs_list": [ 00:20:53.645 { 00:20:53.645 "name": "BaseBdev1", 00:20:53.645 "uuid": "38f1f421-428f-11ef-a0af-c98d8ee52a94", 00:20:53.645 "is_configured": true, 00:20:53.645 "data_offset": 2048, 00:20:53.645 "data_size": 63488 00:20:53.645 }, 00:20:53.645 { 00:20:53.645 "name": "BaseBdev2", 00:20:53.645 "uuid": "3a1f810f-428f-11ef-a0af-c98d8ee52a94", 00:20:53.645 "is_configured": true, 00:20:53.645 "data_offset": 2048, 00:20:53.645 "data_size": 63488 00:20:53.645 }, 00:20:53.645 { 00:20:53.645 "name": "BaseBdev3", 00:20:53.645 "uuid": "3abc6276-428f-11ef-a0af-c98d8ee52a94", 00:20:53.645 "is_configured": true, 00:20:53.645 "data_offset": 2048, 00:20:53.645 "data_size": 63488 00:20:53.645 } 00:20:53.645 ] 00:20:53.645 } 00:20:53.645 } 00:20:53.645 }' 00:20:53.645 09:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:53.645 09:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:20:53.645 BaseBdev2 00:20:53.645 BaseBdev3' 00:20:53.645 09:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:53.645 09:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:53.645 09:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:53.904 09:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:53.904 "name": "BaseBdev1", 00:20:53.904 "aliases": [ 00:20:53.904 "38f1f421-428f-11ef-a0af-c98d8ee52a94" 00:20:53.904 
], 00:20:53.904 "product_name": "Malloc disk", 00:20:53.904 "block_size": 512, 00:20:53.904 "num_blocks": 65536, 00:20:53.904 "uuid": "38f1f421-428f-11ef-a0af-c98d8ee52a94", 00:20:53.904 "assigned_rate_limits": { 00:20:53.904 "rw_ios_per_sec": 0, 00:20:53.904 "rw_mbytes_per_sec": 0, 00:20:53.904 "r_mbytes_per_sec": 0, 00:20:53.904 "w_mbytes_per_sec": 0 00:20:53.904 }, 00:20:53.904 "claimed": true, 00:20:53.904 "claim_type": "exclusive_write", 00:20:53.904 "zoned": false, 00:20:53.904 "supported_io_types": { 00:20:53.904 "read": true, 00:20:53.904 "write": true, 00:20:53.904 "unmap": true, 00:20:53.905 "flush": true, 00:20:53.905 "reset": true, 00:20:53.905 "nvme_admin": false, 00:20:53.905 "nvme_io": false, 00:20:53.905 "nvme_io_md": false, 00:20:53.905 "write_zeroes": true, 00:20:53.905 "zcopy": true, 00:20:53.905 "get_zone_info": false, 00:20:53.905 "zone_management": false, 00:20:53.905 "zone_append": false, 00:20:53.905 "compare": false, 00:20:53.905 "compare_and_write": false, 00:20:53.905 "abort": true, 00:20:53.905 "seek_hole": false, 00:20:53.905 "seek_data": false, 00:20:53.905 "copy": true, 00:20:53.905 "nvme_iov_md": false 00:20:53.905 }, 00:20:53.905 "memory_domains": [ 00:20:53.905 { 00:20:53.905 "dma_device_id": "system", 00:20:53.905 "dma_device_type": 1 00:20:53.905 }, 00:20:53.905 { 00:20:53.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:53.905 "dma_device_type": 2 00:20:53.905 } 00:20:53.905 ], 00:20:53.905 "driver_specific": {} 00:20:53.905 }' 00:20:53.905 09:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:53.905 09:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:53.905 09:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:53.905 09:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:53.905 09:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:53.905 09:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:53.905 09:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:53.905 09:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:53.905 09:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:53.905 09:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:53.905 09:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:53.905 09:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:53.905 09:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:53.905 09:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:53.905 09:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:54.163 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:54.163 "name": "BaseBdev2", 00:20:54.163 "aliases": [ 00:20:54.163 "3a1f810f-428f-11ef-a0af-c98d8ee52a94" 00:20:54.163 ], 00:20:54.163 "product_name": "Malloc disk", 00:20:54.163 "block_size": 512, 00:20:54.163 "num_blocks": 65536, 00:20:54.163 "uuid": 
"3a1f810f-428f-11ef-a0af-c98d8ee52a94", 00:20:54.163 "assigned_rate_limits": { 00:20:54.163 "rw_ios_per_sec": 0, 00:20:54.163 "rw_mbytes_per_sec": 0, 00:20:54.163 "r_mbytes_per_sec": 0, 00:20:54.163 "w_mbytes_per_sec": 0 00:20:54.163 }, 00:20:54.163 "claimed": true, 00:20:54.163 "claim_type": "exclusive_write", 00:20:54.163 "zoned": false, 00:20:54.163 "supported_io_types": { 00:20:54.163 "read": true, 00:20:54.163 "write": true, 00:20:54.163 "unmap": true, 00:20:54.163 "flush": true, 00:20:54.163 "reset": true, 00:20:54.163 "nvme_admin": false, 00:20:54.163 "nvme_io": false, 00:20:54.163 "nvme_io_md": false, 00:20:54.163 "write_zeroes": true, 00:20:54.163 "zcopy": true, 00:20:54.163 "get_zone_info": false, 00:20:54.163 "zone_management": false, 00:20:54.163 "zone_append": false, 00:20:54.163 "compare": false, 00:20:54.163 "compare_and_write": false, 00:20:54.163 "abort": true, 00:20:54.163 "seek_hole": false, 00:20:54.163 "seek_data": false, 00:20:54.163 "copy": true, 00:20:54.163 "nvme_iov_md": false 00:20:54.163 }, 00:20:54.163 "memory_domains": [ 00:20:54.163 { 00:20:54.163 "dma_device_id": "system", 00:20:54.163 "dma_device_type": 1 00:20:54.163 }, 00:20:54.163 { 00:20:54.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:54.164 "dma_device_type": 2 00:20:54.164 } 00:20:54.164 ], 00:20:54.164 "driver_specific": {} 00:20:54.164 }' 00:20:54.164 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:54.164 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:54.164 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:54.164 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:54.164 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:54.164 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:54.164 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:54.164 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:54.164 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:54.164 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:54.164 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:54.164 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:54.164 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:54.164 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:54.164 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:54.423 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:54.423 "name": "BaseBdev3", 00:20:54.423 "aliases": [ 00:20:54.423 "3abc6276-428f-11ef-a0af-c98d8ee52a94" 00:20:54.423 ], 00:20:54.423 "product_name": "Malloc disk", 00:20:54.423 "block_size": 512, 00:20:54.423 "num_blocks": 65536, 00:20:54.423 "uuid": "3abc6276-428f-11ef-a0af-c98d8ee52a94", 00:20:54.423 "assigned_rate_limits": { 00:20:54.423 "rw_ios_per_sec": 0, 00:20:54.423 "rw_mbytes_per_sec": 0, 
00:20:54.423 "r_mbytes_per_sec": 0, 00:20:54.423 "w_mbytes_per_sec": 0 00:20:54.423 }, 00:20:54.423 "claimed": true, 00:20:54.423 "claim_type": "exclusive_write", 00:20:54.423 "zoned": false, 00:20:54.423 "supported_io_types": { 00:20:54.423 "read": true, 00:20:54.423 "write": true, 00:20:54.423 "unmap": true, 00:20:54.423 "flush": true, 00:20:54.423 "reset": true, 00:20:54.423 "nvme_admin": false, 00:20:54.423 "nvme_io": false, 00:20:54.423 "nvme_io_md": false, 00:20:54.423 "write_zeroes": true, 00:20:54.423 "zcopy": true, 00:20:54.423 "get_zone_info": false, 00:20:54.423 "zone_management": false, 00:20:54.423 "zone_append": false, 00:20:54.423 "compare": false, 00:20:54.423 "compare_and_write": false, 00:20:54.423 "abort": true, 00:20:54.423 "seek_hole": false, 00:20:54.423 "seek_data": false, 00:20:54.423 "copy": true, 00:20:54.423 "nvme_iov_md": false 00:20:54.423 }, 00:20:54.423 "memory_domains": [ 00:20:54.423 { 00:20:54.423 "dma_device_id": "system", 00:20:54.423 "dma_device_type": 1 00:20:54.423 }, 00:20:54.423 { 00:20:54.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:54.423 "dma_device_type": 2 00:20:54.423 } 00:20:54.423 ], 00:20:54.423 "driver_specific": {} 00:20:54.423 }' 00:20:54.423 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:54.423 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:54.423 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:54.423 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:54.423 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:54.423 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:54.423 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:54.423 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:54.682 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:54.682 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:54.682 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:54.682 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:54.682 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:54.682 [2024-07-15 09:47:22.735763] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:54.682 [2024-07-15 09:47:22.735795] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:54.682 [2024-07-15 09:47:22.735816] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:54.682 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:20:54.682 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:20:54.682 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:54.682 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:20:54.682 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # 
expected_state=offline 00:20:54.682 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:20:54.682 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:54.682 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:20:54.682 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:54.682 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:54.682 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:54.682 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:54.682 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:54.682 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:54.682 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:54.682 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.682 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:54.940 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:54.940 "name": "Existed_Raid", 00:20:54.940 "uuid": "39b85774-428f-11ef-a0af-c98d8ee52a94", 00:20:54.940 "strip_size_kb": 64, 00:20:54.940 "state": "offline", 00:20:54.940 "raid_level": "raid0", 00:20:54.940 "superblock": true, 00:20:54.940 "num_base_bdevs": 3, 00:20:54.940 "num_base_bdevs_discovered": 2, 00:20:54.940 "num_base_bdevs_operational": 2, 00:20:54.940 "base_bdevs_list": [ 00:20:54.940 { 00:20:54.940 "name": null, 00:20:54.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.940 "is_configured": false, 00:20:54.940 "data_offset": 2048, 00:20:54.940 "data_size": 63488 00:20:54.940 }, 00:20:54.940 { 00:20:54.940 "name": "BaseBdev2", 00:20:54.940 "uuid": "3a1f810f-428f-11ef-a0af-c98d8ee52a94", 00:20:54.940 "is_configured": true, 00:20:54.940 "data_offset": 2048, 00:20:54.940 "data_size": 63488 00:20:54.940 }, 00:20:54.940 { 00:20:54.940 "name": "BaseBdev3", 00:20:54.940 "uuid": "3abc6276-428f-11ef-a0af-c98d8ee52a94", 00:20:54.940 "is_configured": true, 00:20:54.940 "data_offset": 2048, 00:20:54.940 "data_size": 63488 00:20:54.940 } 00:20:54.940 ] 00:20:54.940 }' 00:20:54.940 09:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:54.940 09:47:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:55.199 09:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:20:55.199 09:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:55.199 09:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.199 09:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:55.458 09:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # 
raid_bdev=Existed_Raid 00:20:55.458 09:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:55.458 09:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:55.717 [2024-07-15 09:47:23.636481] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:55.717 09:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:55.717 09:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:55.717 09:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.717 09:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:55.976 09:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:55.976 09:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:55.977 09:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:55.977 [2024-07-15 09:47:24.056918] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:55.977 [2024-07-15 09:47:24.056948] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3d7503c34a00 name Existed_Raid, state offline 00:20:55.977 09:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:55.977 09:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:56.236 09:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.236 09:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:20:56.236 09:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:20:56.236 09:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:20:56.236 09:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:20:56.236 09:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:20:56.236 09:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:56.236 09:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:56.495 BaseBdev2 00:20:56.495 09:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:20:56.495 09:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:20:56.495 09:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:56.495 09:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:56.495 09:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:56.495 09:47:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:56.495 09:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:56.754 09:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:57.013 [ 00:20:57.013 { 00:20:57.013 "name": "BaseBdev2", 00:20:57.013 "aliases": [ 00:20:57.013 "3d1f0827-428f-11ef-a0af-c98d8ee52a94" 00:20:57.013 ], 00:20:57.013 "product_name": "Malloc disk", 00:20:57.013 "block_size": 512, 00:20:57.013 "num_blocks": 65536, 00:20:57.013 "uuid": "3d1f0827-428f-11ef-a0af-c98d8ee52a94", 00:20:57.013 "assigned_rate_limits": { 00:20:57.013 "rw_ios_per_sec": 0, 00:20:57.013 "rw_mbytes_per_sec": 0, 00:20:57.013 "r_mbytes_per_sec": 0, 00:20:57.013 "w_mbytes_per_sec": 0 00:20:57.013 }, 00:20:57.013 "claimed": false, 00:20:57.013 "zoned": false, 00:20:57.013 "supported_io_types": { 00:20:57.013 "read": true, 00:20:57.013 "write": true, 00:20:57.013 "unmap": true, 00:20:57.013 "flush": true, 00:20:57.013 "reset": true, 00:20:57.013 "nvme_admin": false, 00:20:57.013 "nvme_io": false, 00:20:57.013 "nvme_io_md": false, 00:20:57.013 "write_zeroes": true, 00:20:57.013 "zcopy": true, 00:20:57.013 "get_zone_info": false, 00:20:57.013 "zone_management": false, 00:20:57.013 "zone_append": false, 00:20:57.013 "compare": false, 00:20:57.013 "compare_and_write": false, 00:20:57.013 "abort": true, 00:20:57.013 "seek_hole": false, 00:20:57.013 "seek_data": false, 00:20:57.013 "copy": true, 00:20:57.013 "nvme_iov_md": false 00:20:57.013 }, 00:20:57.013 "memory_domains": [ 00:20:57.013 { 00:20:57.013 "dma_device_id": "system", 00:20:57.013 "dma_device_type": 1 00:20:57.013 }, 00:20:57.013 { 00:20:57.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:57.013 "dma_device_type": 2 00:20:57.013 } 00:20:57.013 ], 00:20:57.013 "driver_specific": {} 00:20:57.013 } 00:20:57.013 ] 00:20:57.013 09:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:57.013 09:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:57.013 09:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:57.013 09:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:57.270 BaseBdev3 00:20:57.270 09:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:20:57.270 09:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:20:57.270 09:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:57.270 09:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:57.270 09:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:57.270 09:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:57.270 09:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:57.270 09:47:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:57.529 [ 00:20:57.529 { 00:20:57.529 "name": "BaseBdev3", 00:20:57.529 "aliases": [ 00:20:57.529 "3d7d08ce-428f-11ef-a0af-c98d8ee52a94" 00:20:57.529 ], 00:20:57.529 "product_name": "Malloc disk", 00:20:57.529 "block_size": 512, 00:20:57.529 "num_blocks": 65536, 00:20:57.529 "uuid": "3d7d08ce-428f-11ef-a0af-c98d8ee52a94", 00:20:57.529 "assigned_rate_limits": { 00:20:57.529 "rw_ios_per_sec": 0, 00:20:57.529 "rw_mbytes_per_sec": 0, 00:20:57.529 "r_mbytes_per_sec": 0, 00:20:57.529 "w_mbytes_per_sec": 0 00:20:57.529 }, 00:20:57.529 "claimed": false, 00:20:57.529 "zoned": false, 00:20:57.529 "supported_io_types": { 00:20:57.529 "read": true, 00:20:57.529 "write": true, 00:20:57.529 "unmap": true, 00:20:57.529 "flush": true, 00:20:57.529 "reset": true, 00:20:57.529 "nvme_admin": false, 00:20:57.529 "nvme_io": false, 00:20:57.529 "nvme_io_md": false, 00:20:57.529 "write_zeroes": true, 00:20:57.529 "zcopy": true, 00:20:57.529 "get_zone_info": false, 00:20:57.529 "zone_management": false, 00:20:57.529 "zone_append": false, 00:20:57.529 "compare": false, 00:20:57.529 "compare_and_write": false, 00:20:57.529 "abort": true, 00:20:57.529 "seek_hole": false, 00:20:57.529 "seek_data": false, 00:20:57.529 "copy": true, 00:20:57.529 "nvme_iov_md": false 00:20:57.529 }, 00:20:57.529 "memory_domains": [ 00:20:57.529 { 00:20:57.529 "dma_device_id": "system", 00:20:57.529 "dma_device_type": 1 00:20:57.529 }, 00:20:57.529 { 00:20:57.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:57.529 "dma_device_type": 2 00:20:57.529 } 00:20:57.529 ], 00:20:57.529 "driver_specific": {} 00:20:57.529 } 00:20:57.529 ] 00:20:57.529 09:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:57.529 09:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:57.529 09:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:57.529 09:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:57.788 [2024-07-15 09:47:25.717631] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:57.788 [2024-07-15 09:47:25.717697] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:57.788 [2024-07-15 09:47:25.717704] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:57.788 [2024-07-15 09:47:25.718311] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:57.788 09:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:57.788 09:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:57.788 09:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:57.788 09:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:57.788 09:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:57.788 09:47:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:57.788 09:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:57.788 09:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:57.788 09:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:57.788 09:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:57.788 09:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:57.788 09:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:58.046 09:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:58.046 "name": "Existed_Raid", 00:20:58.046 "uuid": "3dd934d0-428f-11ef-a0af-c98d8ee52a94", 00:20:58.046 "strip_size_kb": 64, 00:20:58.046 "state": "configuring", 00:20:58.046 "raid_level": "raid0", 00:20:58.046 "superblock": true, 00:20:58.046 "num_base_bdevs": 3, 00:20:58.046 "num_base_bdevs_discovered": 2, 00:20:58.046 "num_base_bdevs_operational": 3, 00:20:58.046 "base_bdevs_list": [ 00:20:58.046 { 00:20:58.046 "name": "BaseBdev1", 00:20:58.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.046 "is_configured": false, 00:20:58.046 "data_offset": 0, 00:20:58.046 "data_size": 0 00:20:58.046 }, 00:20:58.046 { 00:20:58.046 "name": "BaseBdev2", 00:20:58.046 "uuid": "3d1f0827-428f-11ef-a0af-c98d8ee52a94", 00:20:58.046 "is_configured": true, 00:20:58.046 "data_offset": 2048, 00:20:58.046 "data_size": 63488 00:20:58.046 }, 00:20:58.046 { 00:20:58.046 "name": "BaseBdev3", 00:20:58.046 "uuid": "3d7d08ce-428f-11ef-a0af-c98d8ee52a94", 00:20:58.046 "is_configured": true, 00:20:58.046 "data_offset": 2048, 00:20:58.046 "data_size": 63488 00:20:58.046 } 00:20:58.046 ] 00:20:58.046 }' 00:20:58.046 09:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:58.046 09:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:58.305 09:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:58.564 [2024-07-15 09:47:26.413670] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:58.564 09:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:58.564 09:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:58.565 09:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:58.565 09:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:58.565 09:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:58.565 09:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:58.565 09:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:58.565 09:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:20:58.565 09:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:58.565 09:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:58.565 09:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:58.565 09:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:58.565 09:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:58.565 "name": "Existed_Raid", 00:20:58.565 "uuid": "3dd934d0-428f-11ef-a0af-c98d8ee52a94", 00:20:58.565 "strip_size_kb": 64, 00:20:58.565 "state": "configuring", 00:20:58.565 "raid_level": "raid0", 00:20:58.565 "superblock": true, 00:20:58.565 "num_base_bdevs": 3, 00:20:58.565 "num_base_bdevs_discovered": 1, 00:20:58.565 "num_base_bdevs_operational": 3, 00:20:58.565 "base_bdevs_list": [ 00:20:58.565 { 00:20:58.565 "name": "BaseBdev1", 00:20:58.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.565 "is_configured": false, 00:20:58.565 "data_offset": 0, 00:20:58.565 "data_size": 0 00:20:58.565 }, 00:20:58.565 { 00:20:58.565 "name": null, 00:20:58.565 "uuid": "3d1f0827-428f-11ef-a0af-c98d8ee52a94", 00:20:58.565 "is_configured": false, 00:20:58.565 "data_offset": 2048, 00:20:58.565 "data_size": 63488 00:20:58.565 }, 00:20:58.565 { 00:20:58.565 "name": "BaseBdev3", 00:20:58.565 "uuid": "3d7d08ce-428f-11ef-a0af-c98d8ee52a94", 00:20:58.565 "is_configured": true, 00:20:58.565 "data_offset": 2048, 00:20:58.565 "data_size": 63488 00:20:58.565 } 00:20:58.565 ] 00:20:58.565 }' 00:20:58.565 09:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:58.565 09:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:58.824 09:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:58.824 09:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:59.088 09:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:20:59.088 09:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:59.346 [2024-07-15 09:47:27.301875] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:59.346 BaseBdev1 00:20:59.346 09:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:20:59.346 09:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:59.346 09:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:59.346 09:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:59.346 09:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:59.346 09:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:59.346 09:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:59.604 09:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:59.863 [ 00:20:59.863 { 00:20:59.863 "name": "BaseBdev1", 00:20:59.863 "aliases": [ 00:20:59.863 "3ecaecb0-428f-11ef-a0af-c98d8ee52a94" 00:20:59.863 ], 00:20:59.863 "product_name": "Malloc disk", 00:20:59.863 "block_size": 512, 00:20:59.864 "num_blocks": 65536, 00:20:59.864 "uuid": "3ecaecb0-428f-11ef-a0af-c98d8ee52a94", 00:20:59.864 "assigned_rate_limits": { 00:20:59.864 "rw_ios_per_sec": 0, 00:20:59.864 "rw_mbytes_per_sec": 0, 00:20:59.864 "r_mbytes_per_sec": 0, 00:20:59.864 "w_mbytes_per_sec": 0 00:20:59.864 }, 00:20:59.864 "claimed": true, 00:20:59.864 "claim_type": "exclusive_write", 00:20:59.864 "zoned": false, 00:20:59.864 "supported_io_types": { 00:20:59.864 "read": true, 00:20:59.864 "write": true, 00:20:59.864 "unmap": true, 00:20:59.864 "flush": true, 00:20:59.864 "reset": true, 00:20:59.864 "nvme_admin": false, 00:20:59.864 "nvme_io": false, 00:20:59.864 "nvme_io_md": false, 00:20:59.864 "write_zeroes": true, 00:20:59.864 "zcopy": true, 00:20:59.864 "get_zone_info": false, 00:20:59.864 "zone_management": false, 00:20:59.864 "zone_append": false, 00:20:59.864 "compare": false, 00:20:59.864 "compare_and_write": false, 00:20:59.864 "abort": true, 00:20:59.864 "seek_hole": false, 00:20:59.864 "seek_data": false, 00:20:59.864 "copy": true, 00:20:59.864 "nvme_iov_md": false 00:20:59.864 }, 00:20:59.864 "memory_domains": [ 00:20:59.864 { 00:20:59.864 "dma_device_id": "system", 00:20:59.864 "dma_device_type": 1 00:20:59.864 }, 00:20:59.864 { 00:20:59.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:59.864 "dma_device_type": 2 00:20:59.864 } 00:20:59.864 ], 00:20:59.864 "driver_specific": {} 00:20:59.864 } 00:20:59.864 ] 00:20:59.864 09:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:59.864 09:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:59.864 09:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:59.864 09:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:59.864 09:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:59.864 09:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:59.864 09:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:59.864 09:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:59.864 09:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:59.864 09:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:59.864 09:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:59.864 09:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:59.864 09:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:20:59.864 09:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:59.864 "name": "Existed_Raid", 00:20:59.864 "uuid": "3dd934d0-428f-11ef-a0af-c98d8ee52a94", 00:20:59.864 "strip_size_kb": 64, 00:20:59.864 "state": "configuring", 00:20:59.864 "raid_level": "raid0", 00:20:59.864 "superblock": true, 00:20:59.864 "num_base_bdevs": 3, 00:20:59.864 "num_base_bdevs_discovered": 2, 00:20:59.864 "num_base_bdevs_operational": 3, 00:20:59.864 "base_bdevs_list": [ 00:20:59.864 { 00:20:59.864 "name": "BaseBdev1", 00:20:59.864 "uuid": "3ecaecb0-428f-11ef-a0af-c98d8ee52a94", 00:20:59.864 "is_configured": true, 00:20:59.864 "data_offset": 2048, 00:20:59.864 "data_size": 63488 00:20:59.864 }, 00:20:59.864 { 00:20:59.864 "name": null, 00:20:59.864 "uuid": "3d1f0827-428f-11ef-a0af-c98d8ee52a94", 00:20:59.864 "is_configured": false, 00:20:59.864 "data_offset": 2048, 00:20:59.864 "data_size": 63488 00:20:59.864 }, 00:20:59.864 { 00:20:59.864 "name": "BaseBdev3", 00:20:59.864 "uuid": "3d7d08ce-428f-11ef-a0af-c98d8ee52a94", 00:20:59.864 "is_configured": true, 00:20:59.864 "data_offset": 2048, 00:20:59.864 "data_size": 63488 00:20:59.864 } 00:20:59.864 ] 00:20:59.864 }' 00:20:59.864 09:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:59.864 09:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:00.432 09:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.432 09:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:00.432 09:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:21:00.432 09:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:21:00.689 [2024-07-15 09:47:28.633871] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:00.689 09:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:21:00.689 09:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:00.689 09:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:00.689 09:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:00.689 09:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:00.689 09:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:00.689 09:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:00.689 09:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:00.689 09:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:00.689 09:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:00.689 09:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
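For orientation, the state check this trace repeats after every reconfiguration reduces to one RPC call plus a jq filter. A minimal standalone sketch, assuming only the script path and socket shown in the trace; the check_raid_state helper name is mine, not part of bdev_raid.sh:

    #!/usr/bin/env bash
    # Hedged sketch of the verify_raid_bdev_state pattern exercised above.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-raid.sock

    check_raid_state() {    # hypothetical helper; args: <raid bdev name> <expected state>
        local name=$1 expected=$2 info
        info=$("$RPC" -s "$SOCK" bdev_raid_get_bdevs all |
               jq -r ".[] | select(.name == \"$name\")")
        [[ $(jq -r '.state' <<<"$info") == "$expected" ]]
    }

    check_raid_state Existed_Raid configuring || echo "unexpected raid state" >&2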
00:21:00.689 09:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:00.947 09:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:00.947 "name": "Existed_Raid", 00:21:00.947 "uuid": "3dd934d0-428f-11ef-a0af-c98d8ee52a94", 00:21:00.947 "strip_size_kb": 64, 00:21:00.947 "state": "configuring", 00:21:00.947 "raid_level": "raid0", 00:21:00.947 "superblock": true, 00:21:00.947 "num_base_bdevs": 3, 00:21:00.947 "num_base_bdevs_discovered": 1, 00:21:00.947 "num_base_bdevs_operational": 3, 00:21:00.947 "base_bdevs_list": [ 00:21:00.947 { 00:21:00.947 "name": "BaseBdev1", 00:21:00.947 "uuid": "3ecaecb0-428f-11ef-a0af-c98d8ee52a94", 00:21:00.947 "is_configured": true, 00:21:00.947 "data_offset": 2048, 00:21:00.947 "data_size": 63488 00:21:00.947 }, 00:21:00.947 { 00:21:00.947 "name": null, 00:21:00.947 "uuid": "3d1f0827-428f-11ef-a0af-c98d8ee52a94", 00:21:00.947 "is_configured": false, 00:21:00.947 "data_offset": 2048, 00:21:00.947 "data_size": 63488 00:21:00.947 }, 00:21:00.947 { 00:21:00.947 "name": null, 00:21:00.947 "uuid": "3d7d08ce-428f-11ef-a0af-c98d8ee52a94", 00:21:00.947 "is_configured": false, 00:21:00.947 "data_offset": 2048, 00:21:00.947 "data_size": 63488 00:21:00.947 } 00:21:00.947 ] 00:21:00.947 }' 00:21:00.947 09:47:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:00.947 09:47:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:01.206 09:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:01.206 09:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:01.465 09:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:21:01.465 09:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:01.465 [2024-07-15 09:47:29.509965] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:01.465 09:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:21:01.465 09:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:01.465 09:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:01.465 09:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:01.465 09:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:01.465 09:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:01.465 09:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:01.465 09:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:01.465 09:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:01.465 09:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:01.465 09:47:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:01.465 09:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:01.725 09:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:01.725 "name": "Existed_Raid", 00:21:01.725 "uuid": "3dd934d0-428f-11ef-a0af-c98d8ee52a94", 00:21:01.725 "strip_size_kb": 64, 00:21:01.725 "state": "configuring", 00:21:01.725 "raid_level": "raid0", 00:21:01.725 "superblock": true, 00:21:01.725 "num_base_bdevs": 3, 00:21:01.725 "num_base_bdevs_discovered": 2, 00:21:01.725 "num_base_bdevs_operational": 3, 00:21:01.725 "base_bdevs_list": [ 00:21:01.725 { 00:21:01.725 "name": "BaseBdev1", 00:21:01.725 "uuid": "3ecaecb0-428f-11ef-a0af-c98d8ee52a94", 00:21:01.725 "is_configured": true, 00:21:01.725 "data_offset": 2048, 00:21:01.725 "data_size": 63488 00:21:01.725 }, 00:21:01.725 { 00:21:01.725 "name": null, 00:21:01.725 "uuid": "3d1f0827-428f-11ef-a0af-c98d8ee52a94", 00:21:01.725 "is_configured": false, 00:21:01.725 "data_offset": 2048, 00:21:01.725 "data_size": 63488 00:21:01.725 }, 00:21:01.725 { 00:21:01.725 "name": "BaseBdev3", 00:21:01.725 "uuid": "3d7d08ce-428f-11ef-a0af-c98d8ee52a94", 00:21:01.725 "is_configured": true, 00:21:01.725 "data_offset": 2048, 00:21:01.725 "data_size": 63488 00:21:01.725 } 00:21:01.725 ] 00:21:01.725 }' 00:21:01.725 09:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:01.725 09:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:01.983 09:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:01.983 09:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:02.241 09:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:21:02.241 09:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:02.500 [2024-07-15 09:47:30.394079] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:02.500 09:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:21:02.500 09:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:02.500 09:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:02.500 09:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:02.500 09:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:02.500 09:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:02.500 09:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:02.500 09:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:02.500 09:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:02.500 
09:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:02.500 09:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.500 09:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:02.500 09:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:02.500 "name": "Existed_Raid", 00:21:02.500 "uuid": "3dd934d0-428f-11ef-a0af-c98d8ee52a94", 00:21:02.500 "strip_size_kb": 64, 00:21:02.500 "state": "configuring", 00:21:02.500 "raid_level": "raid0", 00:21:02.500 "superblock": true, 00:21:02.500 "num_base_bdevs": 3, 00:21:02.500 "num_base_bdevs_discovered": 1, 00:21:02.500 "num_base_bdevs_operational": 3, 00:21:02.500 "base_bdevs_list": [ 00:21:02.500 { 00:21:02.500 "name": null, 00:21:02.500 "uuid": "3ecaecb0-428f-11ef-a0af-c98d8ee52a94", 00:21:02.500 "is_configured": false, 00:21:02.500 "data_offset": 2048, 00:21:02.500 "data_size": 63488 00:21:02.500 }, 00:21:02.500 { 00:21:02.500 "name": null, 00:21:02.500 "uuid": "3d1f0827-428f-11ef-a0af-c98d8ee52a94", 00:21:02.500 "is_configured": false, 00:21:02.500 "data_offset": 2048, 00:21:02.500 "data_size": 63488 00:21:02.500 }, 00:21:02.500 { 00:21:02.500 "name": "BaseBdev3", 00:21:02.500 "uuid": "3d7d08ce-428f-11ef-a0af-c98d8ee52a94", 00:21:02.501 "is_configured": true, 00:21:02.501 "data_offset": 2048, 00:21:02.501 "data_size": 63488 00:21:02.501 } 00:21:02.501 ] 00:21:02.501 }' 00:21:02.501 09:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:02.501 09:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:03.069 09:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.069 09:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:03.069 09:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:21:03.069 09:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:03.327 [2024-07-15 09:47:31.306600] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:03.327 09:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:21:03.327 09:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:03.327 09:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:03.327 09:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:03.327 09:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:03.327 09:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:03.327 09:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:03.327 09:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:21:03.327 09:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:03.327 09:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:03.327 09:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.327 09:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:03.586 09:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:03.586 "name": "Existed_Raid", 00:21:03.586 "uuid": "3dd934d0-428f-11ef-a0af-c98d8ee52a94", 00:21:03.586 "strip_size_kb": 64, 00:21:03.586 "state": "configuring", 00:21:03.586 "raid_level": "raid0", 00:21:03.586 "superblock": true, 00:21:03.586 "num_base_bdevs": 3, 00:21:03.586 "num_base_bdevs_discovered": 2, 00:21:03.586 "num_base_bdevs_operational": 3, 00:21:03.586 "base_bdevs_list": [ 00:21:03.586 { 00:21:03.586 "name": null, 00:21:03.586 "uuid": "3ecaecb0-428f-11ef-a0af-c98d8ee52a94", 00:21:03.586 "is_configured": false, 00:21:03.586 "data_offset": 2048, 00:21:03.586 "data_size": 63488 00:21:03.586 }, 00:21:03.586 { 00:21:03.586 "name": "BaseBdev2", 00:21:03.586 "uuid": "3d1f0827-428f-11ef-a0af-c98d8ee52a94", 00:21:03.586 "is_configured": true, 00:21:03.586 "data_offset": 2048, 00:21:03.586 "data_size": 63488 00:21:03.586 }, 00:21:03.586 { 00:21:03.586 "name": "BaseBdev3", 00:21:03.586 "uuid": "3d7d08ce-428f-11ef-a0af-c98d8ee52a94", 00:21:03.586 "is_configured": true, 00:21:03.586 "data_offset": 2048, 00:21:03.586 "data_size": 63488 00:21:03.586 } 00:21:03.586 ] 00:21:03.586 }' 00:21:03.586 09:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:03.586 09:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:04.153 09:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:04.153 09:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:04.411 09:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:21:04.411 09:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:04.411 09:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:04.411 09:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 3ecaecb0-428f-11ef-a0af-c98d8ee52a94 00:21:04.670 [2024-07-15 09:47:32.662843] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:04.670 [2024-07-15 09:47:32.662892] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3d7503c34a00 00:21:04.670 [2024-07-15 09:47:32.662896] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:04.670 [2024-07-15 09:47:32.662929] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3d7503c97e20 00:21:04.670 [2024-07-15 09:47:32.662976] bdev_raid.c:1724:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x3d7503c34a00 00:21:04.670 [2024-07-15 09:47:32.662979] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x3d7503c34a00 00:21:04.670 [2024-07-15 09:47:32.662995] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:04.670 NewBaseBdev 00:21:04.670 09:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:21:04.670 09:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:21:04.670 09:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:04.670 09:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:04.670 09:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:04.670 09:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:04.670 09:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:04.928 09:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:05.187 [ 00:21:05.187 { 00:21:05.187 "name": "NewBaseBdev", 00:21:05.187 "aliases": [ 00:21:05.187 "3ecaecb0-428f-11ef-a0af-c98d8ee52a94" 00:21:05.187 ], 00:21:05.187 "product_name": "Malloc disk", 00:21:05.187 "block_size": 512, 00:21:05.187 "num_blocks": 65536, 00:21:05.187 "uuid": "3ecaecb0-428f-11ef-a0af-c98d8ee52a94", 00:21:05.187 "assigned_rate_limits": { 00:21:05.187 "rw_ios_per_sec": 0, 00:21:05.187 "rw_mbytes_per_sec": 0, 00:21:05.187 "r_mbytes_per_sec": 0, 00:21:05.187 "w_mbytes_per_sec": 0 00:21:05.187 }, 00:21:05.187 "claimed": true, 00:21:05.187 "claim_type": "exclusive_write", 00:21:05.187 "zoned": false, 00:21:05.187 "supported_io_types": { 00:21:05.187 "read": true, 00:21:05.187 "write": true, 00:21:05.187 "unmap": true, 00:21:05.187 "flush": true, 00:21:05.187 "reset": true, 00:21:05.187 "nvme_admin": false, 00:21:05.187 "nvme_io": false, 00:21:05.187 "nvme_io_md": false, 00:21:05.187 "write_zeroes": true, 00:21:05.187 "zcopy": true, 00:21:05.187 "get_zone_info": false, 00:21:05.187 "zone_management": false, 00:21:05.187 "zone_append": false, 00:21:05.187 "compare": false, 00:21:05.187 "compare_and_write": false, 00:21:05.187 "abort": true, 00:21:05.187 "seek_hole": false, 00:21:05.187 "seek_data": false, 00:21:05.187 "copy": true, 00:21:05.187 "nvme_iov_md": false 00:21:05.187 }, 00:21:05.187 "memory_domains": [ 00:21:05.187 { 00:21:05.187 "dma_device_id": "system", 00:21:05.187 "dma_device_type": 1 00:21:05.187 }, 00:21:05.187 { 00:21:05.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:05.187 "dma_device_type": 2 00:21:05.187 } 00:21:05.187 ], 00:21:05.187 "driver_specific": {} 00:21:05.187 } 00:21:05.187 ] 00:21:05.187 09:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:05.187 09:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:21:05.187 09:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:05.187 09:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 
-- # local expected_state=online 00:21:05.187 09:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:05.187 09:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:05.187 09:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:05.188 09:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:05.188 09:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:05.188 09:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:05.188 09:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:05.188 09:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.188 09:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:05.446 09:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:05.446 "name": "Existed_Raid", 00:21:05.446 "uuid": "3dd934d0-428f-11ef-a0af-c98d8ee52a94", 00:21:05.446 "strip_size_kb": 64, 00:21:05.446 "state": "online", 00:21:05.446 "raid_level": "raid0", 00:21:05.446 "superblock": true, 00:21:05.446 "num_base_bdevs": 3, 00:21:05.446 "num_base_bdevs_discovered": 3, 00:21:05.446 "num_base_bdevs_operational": 3, 00:21:05.446 "base_bdevs_list": [ 00:21:05.446 { 00:21:05.446 "name": "NewBaseBdev", 00:21:05.446 "uuid": "3ecaecb0-428f-11ef-a0af-c98d8ee52a94", 00:21:05.446 "is_configured": true, 00:21:05.446 "data_offset": 2048, 00:21:05.446 "data_size": 63488 00:21:05.446 }, 00:21:05.446 { 00:21:05.446 "name": "BaseBdev2", 00:21:05.446 "uuid": "3d1f0827-428f-11ef-a0af-c98d8ee52a94", 00:21:05.446 "is_configured": true, 00:21:05.446 "data_offset": 2048, 00:21:05.446 "data_size": 63488 00:21:05.446 }, 00:21:05.446 { 00:21:05.446 "name": "BaseBdev3", 00:21:05.446 "uuid": "3d7d08ce-428f-11ef-a0af-c98d8ee52a94", 00:21:05.446 "is_configured": true, 00:21:05.446 "data_offset": 2048, 00:21:05.446 "data_size": 63488 00:21:05.446 } 00:21:05.446 ] 00:21:05.446 }' 00:21:05.446 09:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:05.446 09:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.705 09:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:21:05.705 09:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:05.705 09:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:05.705 09:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:05.705 09:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:05.705 09:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:21:05.705 09:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:05.705 09:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 
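The property audit that follows (the full JSON dump plus the block_size / md_size / md_interleave / dif_type comparisons) condenses to the pattern below; a hedged sketch that reuses only values visible in the dump:

    # Hedged condensation of the verify_raid_bdev_properties checks run below.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($RPC bdev_get_bdevs -b Existed_Raid | jq '.[]')
    [[ $(jq .block_size    <<<"$info") == 512  ]]   # raid volume keeps the 512B base block size
    [[ $(jq .md_size       <<<"$info") == null ]]   # no separate metadata configured
    [[ $(jq .md_interleave <<<"$info") == null ]]
    [[ $(jq .dif_type      <<<"$info") == null ]]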
00:21:05.962 [2024-07-15 09:47:33.902898] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:05.962 09:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:05.962 "name": "Existed_Raid", 00:21:05.962 "aliases": [ 00:21:05.962 "3dd934d0-428f-11ef-a0af-c98d8ee52a94" 00:21:05.962 ], 00:21:05.962 "product_name": "Raid Volume", 00:21:05.962 "block_size": 512, 00:21:05.962 "num_blocks": 190464, 00:21:05.962 "uuid": "3dd934d0-428f-11ef-a0af-c98d8ee52a94", 00:21:05.962 "assigned_rate_limits": { 00:21:05.962 "rw_ios_per_sec": 0, 00:21:05.962 "rw_mbytes_per_sec": 0, 00:21:05.962 "r_mbytes_per_sec": 0, 00:21:05.962 "w_mbytes_per_sec": 0 00:21:05.962 }, 00:21:05.962 "claimed": false, 00:21:05.962 "zoned": false, 00:21:05.962 "supported_io_types": { 00:21:05.962 "read": true, 00:21:05.962 "write": true, 00:21:05.962 "unmap": true, 00:21:05.962 "flush": true, 00:21:05.962 "reset": true, 00:21:05.962 "nvme_admin": false, 00:21:05.962 "nvme_io": false, 00:21:05.962 "nvme_io_md": false, 00:21:05.962 "write_zeroes": true, 00:21:05.962 "zcopy": false, 00:21:05.962 "get_zone_info": false, 00:21:05.962 "zone_management": false, 00:21:05.962 "zone_append": false, 00:21:05.962 "compare": false, 00:21:05.962 "compare_and_write": false, 00:21:05.962 "abort": false, 00:21:05.962 "seek_hole": false, 00:21:05.962 "seek_data": false, 00:21:05.962 "copy": false, 00:21:05.962 "nvme_iov_md": false 00:21:05.962 }, 00:21:05.962 "memory_domains": [ 00:21:05.962 { 00:21:05.962 "dma_device_id": "system", 00:21:05.962 "dma_device_type": 1 00:21:05.962 }, 00:21:05.962 { 00:21:05.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:05.962 "dma_device_type": 2 00:21:05.962 }, 00:21:05.962 { 00:21:05.962 "dma_device_id": "system", 00:21:05.962 "dma_device_type": 1 00:21:05.962 }, 00:21:05.962 { 00:21:05.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:05.962 "dma_device_type": 2 00:21:05.962 }, 00:21:05.962 { 00:21:05.962 "dma_device_id": "system", 00:21:05.962 "dma_device_type": 1 00:21:05.962 }, 00:21:05.962 { 00:21:05.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:05.962 "dma_device_type": 2 00:21:05.962 } 00:21:05.962 ], 00:21:05.962 "driver_specific": { 00:21:05.962 "raid": { 00:21:05.962 "uuid": "3dd934d0-428f-11ef-a0af-c98d8ee52a94", 00:21:05.962 "strip_size_kb": 64, 00:21:05.962 "state": "online", 00:21:05.962 "raid_level": "raid0", 00:21:05.962 "superblock": true, 00:21:05.962 "num_base_bdevs": 3, 00:21:05.962 "num_base_bdevs_discovered": 3, 00:21:05.962 "num_base_bdevs_operational": 3, 00:21:05.962 "base_bdevs_list": [ 00:21:05.962 { 00:21:05.962 "name": "NewBaseBdev", 00:21:05.962 "uuid": "3ecaecb0-428f-11ef-a0af-c98d8ee52a94", 00:21:05.962 "is_configured": true, 00:21:05.962 "data_offset": 2048, 00:21:05.962 "data_size": 63488 00:21:05.962 }, 00:21:05.962 { 00:21:05.962 "name": "BaseBdev2", 00:21:05.962 "uuid": "3d1f0827-428f-11ef-a0af-c98d8ee52a94", 00:21:05.962 "is_configured": true, 00:21:05.962 "data_offset": 2048, 00:21:05.962 "data_size": 63488 00:21:05.962 }, 00:21:05.962 { 00:21:05.962 "name": "BaseBdev3", 00:21:05.962 "uuid": "3d7d08ce-428f-11ef-a0af-c98d8ee52a94", 00:21:05.962 "is_configured": true, 00:21:05.962 "data_offset": 2048, 00:21:05.962 "data_size": 63488 00:21:05.962 } 00:21:05.962 ] 00:21:05.962 } 00:21:05.962 } 00:21:05.962 }' 00:21:05.962 09:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:05.962 
09:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:21:05.962 BaseBdev2 00:21:05.962 BaseBdev3' 00:21:05.962 09:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:05.962 09:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:05.962 09:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:21:06.219 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:06.219 "name": "NewBaseBdev", 00:21:06.219 "aliases": [ 00:21:06.219 "3ecaecb0-428f-11ef-a0af-c98d8ee52a94" 00:21:06.219 ], 00:21:06.219 "product_name": "Malloc disk", 00:21:06.219 "block_size": 512, 00:21:06.219 "num_blocks": 65536, 00:21:06.219 "uuid": "3ecaecb0-428f-11ef-a0af-c98d8ee52a94", 00:21:06.219 "assigned_rate_limits": { 00:21:06.219 "rw_ios_per_sec": 0, 00:21:06.219 "rw_mbytes_per_sec": 0, 00:21:06.219 "r_mbytes_per_sec": 0, 00:21:06.219 "w_mbytes_per_sec": 0 00:21:06.219 }, 00:21:06.219 "claimed": true, 00:21:06.219 "claim_type": "exclusive_write", 00:21:06.219 "zoned": false, 00:21:06.219 "supported_io_types": { 00:21:06.219 "read": true, 00:21:06.219 "write": true, 00:21:06.219 "unmap": true, 00:21:06.219 "flush": true, 00:21:06.219 "reset": true, 00:21:06.219 "nvme_admin": false, 00:21:06.219 "nvme_io": false, 00:21:06.219 "nvme_io_md": false, 00:21:06.219 "write_zeroes": true, 00:21:06.219 "zcopy": true, 00:21:06.219 "get_zone_info": false, 00:21:06.219 "zone_management": false, 00:21:06.219 "zone_append": false, 00:21:06.219 "compare": false, 00:21:06.219 "compare_and_write": false, 00:21:06.219 "abort": true, 00:21:06.219 "seek_hole": false, 00:21:06.219 "seek_data": false, 00:21:06.219 "copy": true, 00:21:06.219 "nvme_iov_md": false 00:21:06.219 }, 00:21:06.219 "memory_domains": [ 00:21:06.219 { 00:21:06.219 "dma_device_id": "system", 00:21:06.219 "dma_device_type": 1 00:21:06.219 }, 00:21:06.219 { 00:21:06.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:06.219 "dma_device_type": 2 00:21:06.219 } 00:21:06.219 ], 00:21:06.219 "driver_specific": {} 00:21:06.219 }' 00:21:06.219 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:06.219 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:06.219 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:06.219 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:06.219 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:06.219 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:06.219 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:06.219 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:06.219 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:06.219 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:06.219 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:06.219 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == 
null ]] 00:21:06.219 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:06.219 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:06.219 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:06.477 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:06.477 "name": "BaseBdev2", 00:21:06.477 "aliases": [ 00:21:06.477 "3d1f0827-428f-11ef-a0af-c98d8ee52a94" 00:21:06.477 ], 00:21:06.477 "product_name": "Malloc disk", 00:21:06.477 "block_size": 512, 00:21:06.477 "num_blocks": 65536, 00:21:06.477 "uuid": "3d1f0827-428f-11ef-a0af-c98d8ee52a94", 00:21:06.477 "assigned_rate_limits": { 00:21:06.477 "rw_ios_per_sec": 0, 00:21:06.477 "rw_mbytes_per_sec": 0, 00:21:06.477 "r_mbytes_per_sec": 0, 00:21:06.477 "w_mbytes_per_sec": 0 00:21:06.477 }, 00:21:06.477 "claimed": true, 00:21:06.477 "claim_type": "exclusive_write", 00:21:06.477 "zoned": false, 00:21:06.477 "supported_io_types": { 00:21:06.477 "read": true, 00:21:06.477 "write": true, 00:21:06.477 "unmap": true, 00:21:06.477 "flush": true, 00:21:06.477 "reset": true, 00:21:06.477 "nvme_admin": false, 00:21:06.477 "nvme_io": false, 00:21:06.477 "nvme_io_md": false, 00:21:06.477 "write_zeroes": true, 00:21:06.477 "zcopy": true, 00:21:06.477 "get_zone_info": false, 00:21:06.477 "zone_management": false, 00:21:06.477 "zone_append": false, 00:21:06.477 "compare": false, 00:21:06.477 "compare_and_write": false, 00:21:06.477 "abort": true, 00:21:06.477 "seek_hole": false, 00:21:06.477 "seek_data": false, 00:21:06.477 "copy": true, 00:21:06.477 "nvme_iov_md": false 00:21:06.477 }, 00:21:06.477 "memory_domains": [ 00:21:06.477 { 00:21:06.477 "dma_device_id": "system", 00:21:06.477 "dma_device_type": 1 00:21:06.477 }, 00:21:06.477 { 00:21:06.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:06.477 "dma_device_type": 2 00:21:06.477 } 00:21:06.477 ], 00:21:06.477 "driver_specific": {} 00:21:06.477 }' 00:21:06.477 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:06.477 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:06.477 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:06.477 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:06.477 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:06.477 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:06.477 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:06.477 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:06.477 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:06.477 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:06.477 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:06.477 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:06.477 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:06.477 09:47:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:06.477 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:06.734 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:06.734 "name": "BaseBdev3", 00:21:06.734 "aliases": [ 00:21:06.734 "3d7d08ce-428f-11ef-a0af-c98d8ee52a94" 00:21:06.734 ], 00:21:06.734 "product_name": "Malloc disk", 00:21:06.734 "block_size": 512, 00:21:06.734 "num_blocks": 65536, 00:21:06.734 "uuid": "3d7d08ce-428f-11ef-a0af-c98d8ee52a94", 00:21:06.734 "assigned_rate_limits": { 00:21:06.734 "rw_ios_per_sec": 0, 00:21:06.734 "rw_mbytes_per_sec": 0, 00:21:06.734 "r_mbytes_per_sec": 0, 00:21:06.734 "w_mbytes_per_sec": 0 00:21:06.734 }, 00:21:06.734 "claimed": true, 00:21:06.734 "claim_type": "exclusive_write", 00:21:06.734 "zoned": false, 00:21:06.734 "supported_io_types": { 00:21:06.734 "read": true, 00:21:06.734 "write": true, 00:21:06.734 "unmap": true, 00:21:06.734 "flush": true, 00:21:06.734 "reset": true, 00:21:06.734 "nvme_admin": false, 00:21:06.734 "nvme_io": false, 00:21:06.734 "nvme_io_md": false, 00:21:06.734 "write_zeroes": true, 00:21:06.734 "zcopy": true, 00:21:06.734 "get_zone_info": false, 00:21:06.734 "zone_management": false, 00:21:06.734 "zone_append": false, 00:21:06.734 "compare": false, 00:21:06.734 "compare_and_write": false, 00:21:06.734 "abort": true, 00:21:06.734 "seek_hole": false, 00:21:06.734 "seek_data": false, 00:21:06.734 "copy": true, 00:21:06.734 "nvme_iov_md": false 00:21:06.734 }, 00:21:06.734 "memory_domains": [ 00:21:06.734 { 00:21:06.734 "dma_device_id": "system", 00:21:06.734 "dma_device_type": 1 00:21:06.734 }, 00:21:06.734 { 00:21:06.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:06.735 "dma_device_type": 2 00:21:06.735 } 00:21:06.735 ], 00:21:06.735 "driver_specific": {} 00:21:06.735 }' 00:21:06.735 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:06.735 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:06.735 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:06.735 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:06.735 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:06.735 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:06.735 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:06.735 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:06.735 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:06.735 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:06.735 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:06.992 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:06.992 09:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:06.992 [2024-07-15 09:47:35.055018] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:21:06.992 [2024-07-15 09:47:35.055047] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:06.992 [2024-07-15 09:47:35.055060] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:06.992 [2024-07-15 09:47:35.055072] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:06.992 [2024-07-15 09:47:35.055076] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3d7503c34a00 name Existed_Raid, state offline 00:21:06.992 09:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 52641 00:21:06.992 09:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 52641 ']' 00:21:06.992 09:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 52641 00:21:06.992 09:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:21:06.992 09:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:21:06.992 09:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 52641 00:21:06.992 09:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:21:06.992 09:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:21:06.992 09:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:21:06.992 killing process with pid 52641 00:21:06.992 09:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 52641' 00:21:06.992 09:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 52641 00:21:06.992 [2024-07-15 09:47:35.088726] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:06.992 09:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 52641 00:21:07.251 [2024-07-15 09:47:35.115143] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:07.510 09:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:21:07.510 00:21:07.510 real 0m20.180s 00:21:07.510 user 0m35.794s 00:21:07.510 sys 0m3.797s 00:21:07.510 09:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:07.510 09:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:07.510 ************************************ 00:21:07.510 END TEST raid_state_function_test_sb 00:21:07.510 ************************************ 00:21:07.510 09:47:35 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:07.510 09:47:35 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:21:07.510 09:47:35 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:21:07.510 09:47:35 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:07.510 09:47:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:07.510 ************************************ 00:21:07.510 START TEST raid_superblock_test 00:21:07.510 ************************************ 00:21:07.510 09:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 3 00:21:07.510 09:47:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:21:07.510 09:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:21:07.510 09:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:21:07.510 09:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:21:07.510 09:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:21:07.510 09:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:21:07.510 09:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:21:07.510 09:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:21:07.510 09:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:21:07.510 09:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:21:07.510 09:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:21:07.510 09:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:21:07.510 09:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:21:07.510 09:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:21:07.510 09:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:21:07.510 09:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:21:07.510 09:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=53353 00:21:07.510 09:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:21:07.510 09:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 53353 /var/tmp/spdk-raid.sock 00:21:07.510 09:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 53353 ']' 00:21:07.510 09:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:07.510 09:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:07.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:07.510 09:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:07.510 09:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:07.510 09:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.510 [2024-07-15 09:47:35.437633] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:21:07.510 [2024-07-15 09:47:35.437926] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:21:08.076 EAL: TSC is not safe to use in SMP mode 00:21:08.076 EAL: TSC is not invariant 00:21:08.076 [2024-07-15 09:47:36.151520] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.334 [2024-07-15 09:47:36.267102] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
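For context, the process startup producing the EAL and app notices interleaved here follows the pattern below; a hedged sketch in which the bdev_svc path, socket, and -L flag come from the trace, while wait_for_rpc is a hypothetical stand-in for the suite's waitforlisten helper:

    # Hedged sketch of launching bdev_svc and waiting for its RPC socket.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    BDEV_SVC=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
    SOCK=/var/tmp/spdk-raid.sock

    "$BDEV_SVC" -r "$SOCK" -L bdev_raid &    # -L bdev_raid enables the *DEBUG* lines seen in this log
    raid_pid=$!

    wait_for_rpc() {    # hypothetical; the suite's real helper is waitforlisten
        local i
        for ((i = 0; i < 100; i++)); do
            "$RPC" -s "$SOCK" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }
    wait_for_rpc || { kill "$raid_pid"; exit 1; }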
00:21:08.334 [2024-07-15 09:47:36.269628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:08.334 [2024-07-15 09:47:36.270377] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:08.334 [2024-07-15 09:47:36.270388] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:08.334 09:47:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:08.334 09:47:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:21:08.334 09:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:21:08.334 09:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:08.335 09:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:21:08.335 09:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:21:08.335 09:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:08.335 09:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:08.335 09:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:08.335 09:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:08.335 09:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:21:08.594 malloc1 00:21:08.594 09:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:08.854 [2024-07-15 09:47:36.765735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:08.854 [2024-07-15 09:47:36.765806] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:08.854 [2024-07-15 09:47:36.765815] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15aef4634780 00:21:08.854 [2024-07-15 09:47:36.765822] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:08.854 [2024-07-15 09:47:36.766787] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:08.854 [2024-07-15 09:47:36.766818] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:08.854 pt1 00:21:08.854 09:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:08.854 09:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:08.854 09:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:21:08.854 09:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:21:08.854 09:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:08.854 09:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:08.854 09:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:08.854 09:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:08.854 09:47:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:21:09.113 malloc2 00:21:09.113 09:47:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:09.113 [2024-07-15 09:47:37.177760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:09.113 [2024-07-15 09:47:37.177828] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:09.113 [2024-07-15 09:47:37.177837] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15aef4634c80 00:21:09.113 [2024-07-15 09:47:37.177844] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:09.113 [2024-07-15 09:47:37.178515] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:09.113 [2024-07-15 09:47:37.178542] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:09.113 pt2 00:21:09.113 09:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:09.113 09:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:09.113 09:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:21:09.113 09:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:21:09.113 09:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:09.113 09:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:09.113 09:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:09.113 09:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:09.113 09:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:21:09.376 malloc3 00:21:09.376 09:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:09.672 [2024-07-15 09:47:37.617786] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:09.672 [2024-07-15 09:47:37.617852] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:09.672 [2024-07-15 09:47:37.617861] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15aef4635180 00:21:09.672 [2024-07-15 09:47:37.617868] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:09.672 [2024-07-15 09:47:37.618532] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:09.672 [2024-07-15 09:47:37.618561] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:09.672 pt3 00:21:09.672 09:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:09.672 09:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:09.672 09:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:21:09.930 [2024-07-15 09:47:37.825810] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:09.931 [2024-07-15 09:47:37.826429] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:09.931 [2024-07-15 09:47:37.826451] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:09.931 [2024-07-15 09:47:37.826493] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x15aef4635400 00:21:09.931 [2024-07-15 09:47:37.826498] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:09.931 [2024-07-15 09:47:37.826530] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x15aef4697e20 00:21:09.931 [2024-07-15 09:47:37.826603] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x15aef4635400 00:21:09.931 [2024-07-15 09:47:37.826606] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x15aef4635400 00:21:09.931 [2024-07-15 09:47:37.826627] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:09.931 09:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:21:09.931 09:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:09.931 09:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:09.931 09:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:09.931 09:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:09.931 09:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:09.931 09:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:09.931 09:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:09.931 09:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:09.931 09:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:09.931 09:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.931 09:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.190 09:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:10.190 "name": "raid_bdev1", 00:21:10.190 "uuid": "4510c456-428f-11ef-a0af-c98d8ee52a94", 00:21:10.190 "strip_size_kb": 64, 00:21:10.190 "state": "online", 00:21:10.190 "raid_level": "raid0", 00:21:10.190 "superblock": true, 00:21:10.190 "num_base_bdevs": 3, 00:21:10.190 "num_base_bdevs_discovered": 3, 00:21:10.190 "num_base_bdevs_operational": 3, 00:21:10.190 "base_bdevs_list": [ 00:21:10.190 { 00:21:10.190 "name": "pt1", 00:21:10.190 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:10.190 "is_configured": true, 00:21:10.190 "data_offset": 2048, 00:21:10.190 "data_size": 63488 00:21:10.190 }, 00:21:10.190 { 00:21:10.190 "name": "pt2", 00:21:10.190 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:10.190 "is_configured": true, 00:21:10.190 
"data_offset": 2048, 00:21:10.190 "data_size": 63488 00:21:10.190 }, 00:21:10.190 { 00:21:10.190 "name": "pt3", 00:21:10.190 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:10.190 "is_configured": true, 00:21:10.190 "data_offset": 2048, 00:21:10.190 "data_size": 63488 00:21:10.190 } 00:21:10.190 ] 00:21:10.190 }' 00:21:10.190 09:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:10.190 09:47:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:10.451 09:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:21:10.451 09:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:10.451 09:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:10.451 09:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:10.451 09:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:10.451 09:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:10.451 09:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:10.451 09:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:10.451 [2024-07-15 09:47:38.549882] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:10.711 09:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:10.711 "name": "raid_bdev1", 00:21:10.711 "aliases": [ 00:21:10.711 "4510c456-428f-11ef-a0af-c98d8ee52a94" 00:21:10.711 ], 00:21:10.711 "product_name": "Raid Volume", 00:21:10.711 "block_size": 512, 00:21:10.711 "num_blocks": 190464, 00:21:10.711 "uuid": "4510c456-428f-11ef-a0af-c98d8ee52a94", 00:21:10.711 "assigned_rate_limits": { 00:21:10.711 "rw_ios_per_sec": 0, 00:21:10.711 "rw_mbytes_per_sec": 0, 00:21:10.711 "r_mbytes_per_sec": 0, 00:21:10.711 "w_mbytes_per_sec": 0 00:21:10.711 }, 00:21:10.711 "claimed": false, 00:21:10.711 "zoned": false, 00:21:10.711 "supported_io_types": { 00:21:10.711 "read": true, 00:21:10.711 "write": true, 00:21:10.711 "unmap": true, 00:21:10.711 "flush": true, 00:21:10.711 "reset": true, 00:21:10.711 "nvme_admin": false, 00:21:10.711 "nvme_io": false, 00:21:10.711 "nvme_io_md": false, 00:21:10.711 "write_zeroes": true, 00:21:10.711 "zcopy": false, 00:21:10.711 "get_zone_info": false, 00:21:10.711 "zone_management": false, 00:21:10.711 "zone_append": false, 00:21:10.711 "compare": false, 00:21:10.711 "compare_and_write": false, 00:21:10.711 "abort": false, 00:21:10.711 "seek_hole": false, 00:21:10.711 "seek_data": false, 00:21:10.711 "copy": false, 00:21:10.711 "nvme_iov_md": false 00:21:10.711 }, 00:21:10.711 "memory_domains": [ 00:21:10.711 { 00:21:10.711 "dma_device_id": "system", 00:21:10.711 "dma_device_type": 1 00:21:10.711 }, 00:21:10.711 { 00:21:10.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:10.711 "dma_device_type": 2 00:21:10.711 }, 00:21:10.711 { 00:21:10.711 "dma_device_id": "system", 00:21:10.711 "dma_device_type": 1 00:21:10.711 }, 00:21:10.711 { 00:21:10.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:10.711 "dma_device_type": 2 00:21:10.711 }, 00:21:10.711 { 00:21:10.711 "dma_device_id": "system", 00:21:10.711 "dma_device_type": 1 00:21:10.711 }, 00:21:10.711 { 00:21:10.711 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:21:10.711 "dma_device_type": 2 00:21:10.711 } 00:21:10.711 ], 00:21:10.711 "driver_specific": { 00:21:10.711 "raid": { 00:21:10.711 "uuid": "4510c456-428f-11ef-a0af-c98d8ee52a94", 00:21:10.711 "strip_size_kb": 64, 00:21:10.711 "state": "online", 00:21:10.711 "raid_level": "raid0", 00:21:10.711 "superblock": true, 00:21:10.711 "num_base_bdevs": 3, 00:21:10.711 "num_base_bdevs_discovered": 3, 00:21:10.711 "num_base_bdevs_operational": 3, 00:21:10.711 "base_bdevs_list": [ 00:21:10.711 { 00:21:10.711 "name": "pt1", 00:21:10.711 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:10.711 "is_configured": true, 00:21:10.711 "data_offset": 2048, 00:21:10.711 "data_size": 63488 00:21:10.711 }, 00:21:10.711 { 00:21:10.711 "name": "pt2", 00:21:10.711 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:10.711 "is_configured": true, 00:21:10.711 "data_offset": 2048, 00:21:10.711 "data_size": 63488 00:21:10.711 }, 00:21:10.711 { 00:21:10.711 "name": "pt3", 00:21:10.711 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:10.711 "is_configured": true, 00:21:10.711 "data_offset": 2048, 00:21:10.711 "data_size": 63488 00:21:10.711 } 00:21:10.711 ] 00:21:10.711 } 00:21:10.711 } 00:21:10.711 }' 00:21:10.711 09:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:10.711 09:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:10.711 pt2 00:21:10.711 pt3' 00:21:10.711 09:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:10.711 09:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:10.711 09:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:10.711 09:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:10.711 "name": "pt1", 00:21:10.711 "aliases": [ 00:21:10.711 "00000000-0000-0000-0000-000000000001" 00:21:10.711 ], 00:21:10.711 "product_name": "passthru", 00:21:10.711 "block_size": 512, 00:21:10.711 "num_blocks": 65536, 00:21:10.711 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:10.711 "assigned_rate_limits": { 00:21:10.711 "rw_ios_per_sec": 0, 00:21:10.711 "rw_mbytes_per_sec": 0, 00:21:10.711 "r_mbytes_per_sec": 0, 00:21:10.711 "w_mbytes_per_sec": 0 00:21:10.711 }, 00:21:10.711 "claimed": true, 00:21:10.711 "claim_type": "exclusive_write", 00:21:10.711 "zoned": false, 00:21:10.711 "supported_io_types": { 00:21:10.711 "read": true, 00:21:10.711 "write": true, 00:21:10.711 "unmap": true, 00:21:10.711 "flush": true, 00:21:10.711 "reset": true, 00:21:10.711 "nvme_admin": false, 00:21:10.711 "nvme_io": false, 00:21:10.711 "nvme_io_md": false, 00:21:10.711 "write_zeroes": true, 00:21:10.711 "zcopy": true, 00:21:10.711 "get_zone_info": false, 00:21:10.711 "zone_management": false, 00:21:10.711 "zone_append": false, 00:21:10.711 "compare": false, 00:21:10.711 "compare_and_write": false, 00:21:10.711 "abort": true, 00:21:10.711 "seek_hole": false, 00:21:10.711 "seek_data": false, 00:21:10.711 "copy": true, 00:21:10.711 "nvme_iov_md": false 00:21:10.711 }, 00:21:10.711 "memory_domains": [ 00:21:10.711 { 00:21:10.711 "dma_device_id": "system", 00:21:10.711 "dma_device_type": 1 00:21:10.711 }, 00:21:10.711 { 00:21:10.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:10.711 "dma_device_type": 2 
00:21:10.711 } 00:21:10.711 ], 00:21:10.711 "driver_specific": { 00:21:10.711 "passthru": { 00:21:10.711 "name": "pt1", 00:21:10.711 "base_bdev_name": "malloc1" 00:21:10.711 } 00:21:10.711 } 00:21:10.711 }' 00:21:10.711 09:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:10.711 09:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:10.711 09:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:10.711 09:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:10.711 09:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:10.711 09:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:10.711 09:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:10.992 09:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:10.992 09:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:10.992 09:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:10.992 09:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:10.992 09:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:10.992 09:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:10.992 09:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:10.992 09:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:10.992 09:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:10.992 "name": "pt2", 00:21:10.992 "aliases": [ 00:21:10.992 "00000000-0000-0000-0000-000000000002" 00:21:10.992 ], 00:21:10.992 "product_name": "passthru", 00:21:10.992 "block_size": 512, 00:21:10.992 "num_blocks": 65536, 00:21:10.992 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:10.992 "assigned_rate_limits": { 00:21:10.992 "rw_ios_per_sec": 0, 00:21:10.992 "rw_mbytes_per_sec": 0, 00:21:10.992 "r_mbytes_per_sec": 0, 00:21:10.992 "w_mbytes_per_sec": 0 00:21:10.992 }, 00:21:10.992 "claimed": true, 00:21:10.992 "claim_type": "exclusive_write", 00:21:10.992 "zoned": false, 00:21:10.992 "supported_io_types": { 00:21:10.992 "read": true, 00:21:10.992 "write": true, 00:21:10.992 "unmap": true, 00:21:10.992 "flush": true, 00:21:10.992 "reset": true, 00:21:10.992 "nvme_admin": false, 00:21:10.992 "nvme_io": false, 00:21:10.992 "nvme_io_md": false, 00:21:10.992 "write_zeroes": true, 00:21:10.992 "zcopy": true, 00:21:10.992 "get_zone_info": false, 00:21:10.992 "zone_management": false, 00:21:10.992 "zone_append": false, 00:21:10.992 "compare": false, 00:21:10.992 "compare_and_write": false, 00:21:10.992 "abort": true, 00:21:10.992 "seek_hole": false, 00:21:10.992 "seek_data": false, 00:21:10.992 "copy": true, 00:21:10.992 "nvme_iov_md": false 00:21:10.992 }, 00:21:10.992 "memory_domains": [ 00:21:10.992 { 00:21:10.992 "dma_device_id": "system", 00:21:10.992 "dma_device_type": 1 00:21:10.992 }, 00:21:10.992 { 00:21:10.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:10.992 "dma_device_type": 2 00:21:10.992 } 00:21:10.992 ], 00:21:10.992 "driver_specific": { 00:21:10.992 "passthru": { 00:21:10.992 "name": "pt2", 00:21:10.992 "base_bdev_name": 
"malloc2" 00:21:10.992 } 00:21:10.992 } 00:21:10.992 }' 00:21:10.992 09:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:10.992 09:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:10.992 09:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:10.992 09:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:10.992 09:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:10.992 09:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:10.992 09:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:11.255 09:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:11.255 09:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:11.255 09:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:11.255 09:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:11.255 09:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:11.255 09:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:11.255 09:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:21:11.255 09:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:11.255 09:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:11.255 "name": "pt3", 00:21:11.255 "aliases": [ 00:21:11.255 "00000000-0000-0000-0000-000000000003" 00:21:11.255 ], 00:21:11.255 "product_name": "passthru", 00:21:11.255 "block_size": 512, 00:21:11.255 "num_blocks": 65536, 00:21:11.255 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:11.255 "assigned_rate_limits": { 00:21:11.255 "rw_ios_per_sec": 0, 00:21:11.255 "rw_mbytes_per_sec": 0, 00:21:11.255 "r_mbytes_per_sec": 0, 00:21:11.255 "w_mbytes_per_sec": 0 00:21:11.255 }, 00:21:11.255 "claimed": true, 00:21:11.255 "claim_type": "exclusive_write", 00:21:11.255 "zoned": false, 00:21:11.255 "supported_io_types": { 00:21:11.255 "read": true, 00:21:11.255 "write": true, 00:21:11.255 "unmap": true, 00:21:11.255 "flush": true, 00:21:11.255 "reset": true, 00:21:11.255 "nvme_admin": false, 00:21:11.255 "nvme_io": false, 00:21:11.255 "nvme_io_md": false, 00:21:11.255 "write_zeroes": true, 00:21:11.255 "zcopy": true, 00:21:11.255 "get_zone_info": false, 00:21:11.255 "zone_management": false, 00:21:11.255 "zone_append": false, 00:21:11.255 "compare": false, 00:21:11.255 "compare_and_write": false, 00:21:11.255 "abort": true, 00:21:11.255 "seek_hole": false, 00:21:11.255 "seek_data": false, 00:21:11.255 "copy": true, 00:21:11.255 "nvme_iov_md": false 00:21:11.255 }, 00:21:11.255 "memory_domains": [ 00:21:11.255 { 00:21:11.255 "dma_device_id": "system", 00:21:11.255 "dma_device_type": 1 00:21:11.255 }, 00:21:11.255 { 00:21:11.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:11.255 "dma_device_type": 2 00:21:11.255 } 00:21:11.255 ], 00:21:11.255 "driver_specific": { 00:21:11.255 "passthru": { 00:21:11.255 "name": "pt3", 00:21:11.255 "base_bdev_name": "malloc3" 00:21:11.255 } 00:21:11.255 } 00:21:11.255 }' 00:21:11.255 09:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 
00:21:11.255 09:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:11.255 09:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:11.255 09:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:11.521 09:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:11.521 09:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:11.521 09:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:11.521 09:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:11.521 09:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:11.521 09:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:11.521 09:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:11.521 09:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:11.521 09:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:11.521 09:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:21:11.521 [2024-07-15 09:47:39.613962] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:11.790 09:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=4510c456-428f-11ef-a0af-c98d8ee52a94 00:21:11.790 09:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 4510c456-428f-11ef-a0af-c98d8ee52a94 ']' 00:21:11.790 09:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:11.790 [2024-07-15 09:47:39.829940] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:11.790 [2024-07-15 09:47:39.829965] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:11.790 [2024-07-15 09:47:39.829984] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:11.790 [2024-07-15 09:47:39.830000] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:11.790 [2024-07-15 09:47:39.830004] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15aef4635400 name raid_bdev1, state offline 00:21:11.790 09:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.790 09:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:21:12.062 09:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:21:12.062 09:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:21:12.062 09:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:12.062 09:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:12.335 09:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:12.335 09:47:40 bdev_raid.raid_superblock_test -- 
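[annotation] The teardown under way here works from the top of the stack down: bdev_raid_delete removes the array (the trace shows it going offline), then each passthru member is deleted in turn, and a jq filter over bdev_get_bdevs asserts that nothing with product_name "passthru" survives. Roughly, with the same hypothetical $RPC helper:

    $RPC bdev_raid_delete raid_bdev1
    for pt in pt1 pt2 pt3; do
        $RPC bdev_passthru_delete "$pt"
    done
    # the malloc bdevs stay behind, still carrying the raid superblock
    [[ $($RPC bdev_get_bdevs | \
        jq -r '[.[] | select(.product_name == "passthru")] | any') == false ]]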
bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:12.610 09:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:12.610 09:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:12.610 09:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:21:12.610 09:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:12.886 09:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:21:12.886 09:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:12.886 09:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:21:12.886 09:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:12.886 09:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:12.886 09:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:12.886 09:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:12.886 09:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:12.886 09:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:12.886 09:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:12.886 09:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:12.886 09:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:21:12.886 09:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:13.150 [2024-07-15 09:47:41.094017] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:13.150 [2024-07-15 09:47:41.094721] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:13.150 [2024-07-15 09:47:41.094741] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:13.150 [2024-07-15 09:47:41.094756] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:13.150 [2024-07-15 09:47:41.094798] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:13.150 [2024-07-15 09:47:41.094807] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc3 00:21:13.150 [2024-07-15 09:47:41.094815] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:13.150 [2024-07-15 09:47:41.094819] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15aef4635180 name raid_bdev1, state configuring 00:21:13.150 request: 00:21:13.150 { 00:21:13.150 "name": "raid_bdev1", 00:21:13.150 "raid_level": "raid0", 00:21:13.150 "base_bdevs": [ 00:21:13.150 "malloc1", 00:21:13.150 "malloc2", 00:21:13.150 "malloc3" 00:21:13.150 ], 00:21:13.150 "strip_size_kb": 64, 00:21:13.150 "superblock": false, 00:21:13.150 "method": "bdev_raid_create", 00:21:13.150 "req_id": 1 00:21:13.150 } 00:21:13.150 Got JSON-RPC error response 00:21:13.150 response: 00:21:13.150 { 00:21:13.150 "code": -17, 00:21:13.150 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:13.150 } 00:21:13.150 09:47:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:21:13.150 09:47:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:13.150 09:47:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:13.150 09:47:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:13.150 09:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.150 09:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:21:13.408 09:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:21:13.408 09:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:21:13.408 09:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:13.408 [2024-07-15 09:47:41.506036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:13.408 [2024-07-15 09:47:41.506091] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:13.408 [2024-07-15 09:47:41.506099] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15aef4634c80 00:21:13.408 [2024-07-15 09:47:41.506106] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:13.408 [2024-07-15 09:47:41.506822] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:13.408 [2024-07-15 09:47:41.506854] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:13.408 [2024-07-15 09:47:41.506873] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:13.408 [2024-07-15 09:47:41.506884] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:13.408 pt1 00:21:13.666 09:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:21:13.666 09:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:13.666 09:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:13.666 09:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:13.666 09:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local 
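[annotation] The bdev_raid_create call above is expected to fail: malloc1 through malloc3 still carry the superblock of the array that was just deleted, so the module refuses to assemble a new raid_bdev1 over them and the RPC returns -17 (File exists), as the request/response dump shows. The NOT wrapper inverts the exit status; the same check could be written directly as, for instance:

    # sketch: creation over members with a stale superblock must fail
    if $RPC bdev_raid_create -z 64 -r raid0 \
            -b 'malloc1 malloc2 malloc3' -n raid_bdev1 -s; then
        echo "bdev_raid_create unexpectedly succeeded" >&2
        exit 1
    fi

Recreating pt1 immediately afterwards shows the flip side: examine spots the superblock on the new passthru bdev and resurrects raid_bdev1 in the configuring state, with one of three members discovered.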
strip_size=64 00:21:13.666 09:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:13.666 09:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:13.666 09:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:13.666 09:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:13.666 09:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:13.666 09:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.666 09:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.666 09:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:13.666 "name": "raid_bdev1", 00:21:13.666 "uuid": "4510c456-428f-11ef-a0af-c98d8ee52a94", 00:21:13.666 "strip_size_kb": 64, 00:21:13.666 "state": "configuring", 00:21:13.666 "raid_level": "raid0", 00:21:13.666 "superblock": true, 00:21:13.666 "num_base_bdevs": 3, 00:21:13.666 "num_base_bdevs_discovered": 1, 00:21:13.666 "num_base_bdevs_operational": 3, 00:21:13.666 "base_bdevs_list": [ 00:21:13.666 { 00:21:13.666 "name": "pt1", 00:21:13.666 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:13.666 "is_configured": true, 00:21:13.666 "data_offset": 2048, 00:21:13.666 "data_size": 63488 00:21:13.666 }, 00:21:13.666 { 00:21:13.666 "name": null, 00:21:13.666 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:13.666 "is_configured": false, 00:21:13.666 "data_offset": 2048, 00:21:13.666 "data_size": 63488 00:21:13.666 }, 00:21:13.666 { 00:21:13.666 "name": null, 00:21:13.666 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:13.666 "is_configured": false, 00:21:13.666 "data_offset": 2048, 00:21:13.666 "data_size": 63488 00:21:13.666 } 00:21:13.666 ] 00:21:13.666 }' 00:21:13.666 09:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:13.666 09:47:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.923 09:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:21:13.924 09:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:14.180 [2024-07-15 09:47:42.194073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:14.180 [2024-07-15 09:47:42.194123] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:14.180 [2024-07-15 09:47:42.194132] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15aef4635680 00:21:14.180 [2024-07-15 09:47:42.194139] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:14.180 [2024-07-15 09:47:42.194232] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:14.180 [2024-07-15 09:47:42.194239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:14.180 [2024-07-15 09:47:42.194253] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:14.180 [2024-07-15 09:47:42.194259] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:14.180 
pt2 00:21:14.180 09:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:14.437 [2024-07-15 09:47:42.386084] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:21:14.437 09:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:21:14.437 09:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:14.437 09:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:14.437 09:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:14.437 09:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:14.437 09:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:14.437 09:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:14.437 09:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:14.437 09:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:14.437 09:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:14.437 09:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.437 09:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:14.695 09:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:14.695 "name": "raid_bdev1", 00:21:14.695 "uuid": "4510c456-428f-11ef-a0af-c98d8ee52a94", 00:21:14.695 "strip_size_kb": 64, 00:21:14.695 "state": "configuring", 00:21:14.695 "raid_level": "raid0", 00:21:14.695 "superblock": true, 00:21:14.695 "num_base_bdevs": 3, 00:21:14.695 "num_base_bdevs_discovered": 1, 00:21:14.695 "num_base_bdevs_operational": 3, 00:21:14.695 "base_bdevs_list": [ 00:21:14.695 { 00:21:14.695 "name": "pt1", 00:21:14.695 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:14.695 "is_configured": true, 00:21:14.695 "data_offset": 2048, 00:21:14.695 "data_size": 63488 00:21:14.695 }, 00:21:14.695 { 00:21:14.695 "name": null, 00:21:14.695 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:14.695 "is_configured": false, 00:21:14.695 "data_offset": 2048, 00:21:14.695 "data_size": 63488 00:21:14.695 }, 00:21:14.695 { 00:21:14.695 "name": null, 00:21:14.695 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:14.695 "is_configured": false, 00:21:14.695 "data_offset": 2048, 00:21:14.695 "data_size": 63488 00:21:14.695 } 00:21:14.695 ] 00:21:14.695 }' 00:21:14.695 09:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:14.695 09:47:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.952 09:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:21:14.952 09:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:14.952 09:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:15.210 [2024-07-15 
09:47:43.102112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:15.210 [2024-07-15 09:47:43.102162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:15.210 [2024-07-15 09:47:43.102170] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15aef4635680 00:21:15.210 [2024-07-15 09:47:43.102176] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:15.210 [2024-07-15 09:47:43.102253] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:15.210 [2024-07-15 09:47:43.102261] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:15.210 [2024-07-15 09:47:43.102274] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:15.210 [2024-07-15 09:47:43.102280] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:15.210 pt2 00:21:15.210 09:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:21:15.210 09:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:15.210 09:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:15.210 [2024-07-15 09:47:43.298128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:15.210 [2024-07-15 09:47:43.298175] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:15.210 [2024-07-15 09:47:43.298182] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15aef4635400 00:21:15.210 [2024-07-15 09:47:43.298188] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:15.210 [2024-07-15 09:47:43.298254] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:15.210 [2024-07-15 09:47:43.298261] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:15.210 [2024-07-15 09:47:43.298274] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:15.210 [2024-07-15 09:47:43.298280] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:15.210 [2024-07-15 09:47:43.298299] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x15aef4634780 00:21:15.210 [2024-07-15 09:47:43.298303] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:15.210 [2024-07-15 09:47:43.298319] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x15aef4697e20 00:21:15.210 [2024-07-15 09:47:43.298362] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x15aef4634780 00:21:15.210 [2024-07-15 09:47:43.298365] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x15aef4634780 00:21:15.210 [2024-07-15 09:47:43.298380] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:15.210 pt3 00:21:15.468 09:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:21:15.468 09:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:15.468 09:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:21:15.468 09:47:43 bdev_raid.raid_superblock_test -- 
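[annotation] Once pt3 is back the member set is complete and raid_bdev1 reconfigures to online. The reported blockcnt is consistent with the per-member dumps: each passthru exposes 65536 blocks, of which 2048 go to the superblock (the data_offset in the dumps), leaving a data_size of 63488 blocks; raid0 striping across three members gives 3 x 63488 = 190464 blocks, exactly the blockcnt logged here and in the Raid Volume dumps.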
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:15.468 09:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:15.468 09:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:15.468 09:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:15.468 09:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:15.468 09:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:15.468 09:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:15.468 09:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:15.468 09:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:15.468 09:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:15.468 09:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:15.468 09:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:15.468 "name": "raid_bdev1", 00:21:15.468 "uuid": "4510c456-428f-11ef-a0af-c98d8ee52a94", 00:21:15.468 "strip_size_kb": 64, 00:21:15.468 "state": "online", 00:21:15.468 "raid_level": "raid0", 00:21:15.468 "superblock": true, 00:21:15.468 "num_base_bdevs": 3, 00:21:15.468 "num_base_bdevs_discovered": 3, 00:21:15.468 "num_base_bdevs_operational": 3, 00:21:15.468 "base_bdevs_list": [ 00:21:15.468 { 00:21:15.468 "name": "pt1", 00:21:15.468 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:15.468 "is_configured": true, 00:21:15.468 "data_offset": 2048, 00:21:15.468 "data_size": 63488 00:21:15.468 }, 00:21:15.468 { 00:21:15.468 "name": "pt2", 00:21:15.468 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:15.468 "is_configured": true, 00:21:15.468 "data_offset": 2048, 00:21:15.468 "data_size": 63488 00:21:15.468 }, 00:21:15.468 { 00:21:15.468 "name": "pt3", 00:21:15.468 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:15.468 "is_configured": true, 00:21:15.468 "data_offset": 2048, 00:21:15.468 "data_size": 63488 00:21:15.468 } 00:21:15.468 ] 00:21:15.468 }' 00:21:15.468 09:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:15.468 09:47:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.727 09:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:21:15.727 09:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:15.727 09:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:15.727 09:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:15.727 09:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:15.727 09:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:15.727 09:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:15.727 09:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:15.985 [2024-07-15 
09:47:43.958198] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:15.985 09:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:15.986 "name": "raid_bdev1", 00:21:15.986 "aliases": [ 00:21:15.986 "4510c456-428f-11ef-a0af-c98d8ee52a94" 00:21:15.986 ], 00:21:15.986 "product_name": "Raid Volume", 00:21:15.986 "block_size": 512, 00:21:15.986 "num_blocks": 190464, 00:21:15.986 "uuid": "4510c456-428f-11ef-a0af-c98d8ee52a94", 00:21:15.986 "assigned_rate_limits": { 00:21:15.986 "rw_ios_per_sec": 0, 00:21:15.986 "rw_mbytes_per_sec": 0, 00:21:15.986 "r_mbytes_per_sec": 0, 00:21:15.986 "w_mbytes_per_sec": 0 00:21:15.986 }, 00:21:15.986 "claimed": false, 00:21:15.986 "zoned": false, 00:21:15.986 "supported_io_types": { 00:21:15.986 "read": true, 00:21:15.986 "write": true, 00:21:15.986 "unmap": true, 00:21:15.986 "flush": true, 00:21:15.986 "reset": true, 00:21:15.986 "nvme_admin": false, 00:21:15.986 "nvme_io": false, 00:21:15.986 "nvme_io_md": false, 00:21:15.986 "write_zeroes": true, 00:21:15.986 "zcopy": false, 00:21:15.986 "get_zone_info": false, 00:21:15.986 "zone_management": false, 00:21:15.986 "zone_append": false, 00:21:15.986 "compare": false, 00:21:15.986 "compare_and_write": false, 00:21:15.986 "abort": false, 00:21:15.986 "seek_hole": false, 00:21:15.986 "seek_data": false, 00:21:15.986 "copy": false, 00:21:15.986 "nvme_iov_md": false 00:21:15.986 }, 00:21:15.986 "memory_domains": [ 00:21:15.986 { 00:21:15.986 "dma_device_id": "system", 00:21:15.986 "dma_device_type": 1 00:21:15.986 }, 00:21:15.986 { 00:21:15.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:15.986 "dma_device_type": 2 00:21:15.986 }, 00:21:15.986 { 00:21:15.986 "dma_device_id": "system", 00:21:15.986 "dma_device_type": 1 00:21:15.986 }, 00:21:15.986 { 00:21:15.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:15.986 "dma_device_type": 2 00:21:15.986 }, 00:21:15.986 { 00:21:15.986 "dma_device_id": "system", 00:21:15.986 "dma_device_type": 1 00:21:15.986 }, 00:21:15.986 { 00:21:15.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:15.986 "dma_device_type": 2 00:21:15.986 } 00:21:15.986 ], 00:21:15.986 "driver_specific": { 00:21:15.986 "raid": { 00:21:15.986 "uuid": "4510c456-428f-11ef-a0af-c98d8ee52a94", 00:21:15.986 "strip_size_kb": 64, 00:21:15.986 "state": "online", 00:21:15.986 "raid_level": "raid0", 00:21:15.986 "superblock": true, 00:21:15.986 "num_base_bdevs": 3, 00:21:15.986 "num_base_bdevs_discovered": 3, 00:21:15.986 "num_base_bdevs_operational": 3, 00:21:15.986 "base_bdevs_list": [ 00:21:15.986 { 00:21:15.986 "name": "pt1", 00:21:15.986 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:15.986 "is_configured": true, 00:21:15.986 "data_offset": 2048, 00:21:15.986 "data_size": 63488 00:21:15.986 }, 00:21:15.986 { 00:21:15.986 "name": "pt2", 00:21:15.986 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:15.986 "is_configured": true, 00:21:15.986 "data_offset": 2048, 00:21:15.986 "data_size": 63488 00:21:15.986 }, 00:21:15.986 { 00:21:15.986 "name": "pt3", 00:21:15.986 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:15.986 "is_configured": true, 00:21:15.986 "data_offset": 2048, 00:21:15.986 "data_size": 63488 00:21:15.986 } 00:21:15.986 ] 00:21:15.986 } 00:21:15.986 } 00:21:15.986 }' 00:21:15.986 09:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:15.986 09:47:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:15.986 pt2 00:21:15.986 pt3' 00:21:15.986 09:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:15.986 09:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:15.986 09:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:16.245 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:16.245 "name": "pt1", 00:21:16.245 "aliases": [ 00:21:16.245 "00000000-0000-0000-0000-000000000001" 00:21:16.245 ], 00:21:16.245 "product_name": "passthru", 00:21:16.245 "block_size": 512, 00:21:16.245 "num_blocks": 65536, 00:21:16.245 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:16.245 "assigned_rate_limits": { 00:21:16.245 "rw_ios_per_sec": 0, 00:21:16.245 "rw_mbytes_per_sec": 0, 00:21:16.245 "r_mbytes_per_sec": 0, 00:21:16.245 "w_mbytes_per_sec": 0 00:21:16.245 }, 00:21:16.245 "claimed": true, 00:21:16.245 "claim_type": "exclusive_write", 00:21:16.245 "zoned": false, 00:21:16.245 "supported_io_types": { 00:21:16.245 "read": true, 00:21:16.245 "write": true, 00:21:16.245 "unmap": true, 00:21:16.245 "flush": true, 00:21:16.245 "reset": true, 00:21:16.245 "nvme_admin": false, 00:21:16.245 "nvme_io": false, 00:21:16.245 "nvme_io_md": false, 00:21:16.245 "write_zeroes": true, 00:21:16.245 "zcopy": true, 00:21:16.245 "get_zone_info": false, 00:21:16.245 "zone_management": false, 00:21:16.245 "zone_append": false, 00:21:16.245 "compare": false, 00:21:16.245 "compare_and_write": false, 00:21:16.245 "abort": true, 00:21:16.245 "seek_hole": false, 00:21:16.245 "seek_data": false, 00:21:16.245 "copy": true, 00:21:16.245 "nvme_iov_md": false 00:21:16.245 }, 00:21:16.245 "memory_domains": [ 00:21:16.245 { 00:21:16.245 "dma_device_id": "system", 00:21:16.245 "dma_device_type": 1 00:21:16.245 }, 00:21:16.245 { 00:21:16.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:16.245 "dma_device_type": 2 00:21:16.245 } 00:21:16.245 ], 00:21:16.245 "driver_specific": { 00:21:16.245 "passthru": { 00:21:16.245 "name": "pt1", 00:21:16.245 "base_bdev_name": "malloc1" 00:21:16.245 } 00:21:16.245 } 00:21:16.245 }' 00:21:16.245 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:16.245 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:16.246 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:16.246 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:16.246 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:16.246 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:16.246 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:16.246 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:16.246 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:16.246 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:16.246 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:16.246 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:16.246 09:47:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:16.246 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:16.246 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:16.504 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:16.504 "name": "pt2", 00:21:16.504 "aliases": [ 00:21:16.504 "00000000-0000-0000-0000-000000000002" 00:21:16.504 ], 00:21:16.504 "product_name": "passthru", 00:21:16.504 "block_size": 512, 00:21:16.504 "num_blocks": 65536, 00:21:16.504 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:16.504 "assigned_rate_limits": { 00:21:16.504 "rw_ios_per_sec": 0, 00:21:16.504 "rw_mbytes_per_sec": 0, 00:21:16.504 "r_mbytes_per_sec": 0, 00:21:16.504 "w_mbytes_per_sec": 0 00:21:16.504 }, 00:21:16.504 "claimed": true, 00:21:16.504 "claim_type": "exclusive_write", 00:21:16.504 "zoned": false, 00:21:16.504 "supported_io_types": { 00:21:16.504 "read": true, 00:21:16.504 "write": true, 00:21:16.504 "unmap": true, 00:21:16.504 "flush": true, 00:21:16.504 "reset": true, 00:21:16.504 "nvme_admin": false, 00:21:16.504 "nvme_io": false, 00:21:16.504 "nvme_io_md": false, 00:21:16.504 "write_zeroes": true, 00:21:16.504 "zcopy": true, 00:21:16.504 "get_zone_info": false, 00:21:16.504 "zone_management": false, 00:21:16.504 "zone_append": false, 00:21:16.504 "compare": false, 00:21:16.504 "compare_and_write": false, 00:21:16.504 "abort": true, 00:21:16.504 "seek_hole": false, 00:21:16.504 "seek_data": false, 00:21:16.504 "copy": true, 00:21:16.504 "nvme_iov_md": false 00:21:16.504 }, 00:21:16.504 "memory_domains": [ 00:21:16.504 { 00:21:16.504 "dma_device_id": "system", 00:21:16.504 "dma_device_type": 1 00:21:16.504 }, 00:21:16.504 { 00:21:16.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:16.504 "dma_device_type": 2 00:21:16.504 } 00:21:16.504 ], 00:21:16.504 "driver_specific": { 00:21:16.504 "passthru": { 00:21:16.504 "name": "pt2", 00:21:16.504 "base_bdev_name": "malloc2" 00:21:16.504 } 00:21:16.504 } 00:21:16.504 }' 00:21:16.504 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:16.504 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:16.504 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:16.504 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:16.504 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:16.504 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:16.504 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:16.504 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:16.504 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:16.504 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:16.504 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:16.504 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:16.504 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:16.504 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:21:16.504 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:17.071 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:17.071 "name": "pt3", 00:21:17.071 "aliases": [ 00:21:17.071 "00000000-0000-0000-0000-000000000003" 00:21:17.071 ], 00:21:17.071 "product_name": "passthru", 00:21:17.071 "block_size": 512, 00:21:17.071 "num_blocks": 65536, 00:21:17.071 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:17.071 "assigned_rate_limits": { 00:21:17.071 "rw_ios_per_sec": 0, 00:21:17.071 "rw_mbytes_per_sec": 0, 00:21:17.071 "r_mbytes_per_sec": 0, 00:21:17.071 "w_mbytes_per_sec": 0 00:21:17.071 }, 00:21:17.071 "claimed": true, 00:21:17.071 "claim_type": "exclusive_write", 00:21:17.071 "zoned": false, 00:21:17.071 "supported_io_types": { 00:21:17.071 "read": true, 00:21:17.071 "write": true, 00:21:17.071 "unmap": true, 00:21:17.071 "flush": true, 00:21:17.071 "reset": true, 00:21:17.071 "nvme_admin": false, 00:21:17.071 "nvme_io": false, 00:21:17.071 "nvme_io_md": false, 00:21:17.071 "write_zeroes": true, 00:21:17.071 "zcopy": true, 00:21:17.071 "get_zone_info": false, 00:21:17.071 "zone_management": false, 00:21:17.071 "zone_append": false, 00:21:17.071 "compare": false, 00:21:17.071 "compare_and_write": false, 00:21:17.071 "abort": true, 00:21:17.071 "seek_hole": false, 00:21:17.071 "seek_data": false, 00:21:17.071 "copy": true, 00:21:17.071 "nvme_iov_md": false 00:21:17.071 }, 00:21:17.071 "memory_domains": [ 00:21:17.071 { 00:21:17.071 "dma_device_id": "system", 00:21:17.071 "dma_device_type": 1 00:21:17.071 }, 00:21:17.071 { 00:21:17.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:17.071 "dma_device_type": 2 00:21:17.071 } 00:21:17.071 ], 00:21:17.071 "driver_specific": { 00:21:17.071 "passthru": { 00:21:17.071 "name": "pt3", 00:21:17.071 "base_bdev_name": "malloc3" 00:21:17.071 } 00:21:17.071 } 00:21:17.071 }' 00:21:17.071 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:17.071 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:17.071 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:17.071 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:17.071 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:17.071 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:17.071 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:17.071 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:17.071 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:17.071 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:17.071 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:17.071 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:17.071 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:17.071 09:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:21:17.071 [2024-07-15 09:47:45.142250] 
bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:17.071 09:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 4510c456-428f-11ef-a0af-c98d8ee52a94 '!=' 4510c456-428f-11ef-a0af-c98d8ee52a94 ']' 00:21:17.071 09:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:21:17.071 09:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:17.071 09:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:21:17.071 09:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 53353 00:21:17.071 09:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 53353 ']' 00:21:17.071 09:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 53353 00:21:17.071 09:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:21:17.071 09:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:21:17.071 09:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 53353 00:21:17.071 09:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:21:17.071 09:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:21:17.071 09:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:21:17.071 killing process with pid 53353 00:21:17.071 09:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 53353' 00:21:17.071 09:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 53353 00:21:17.071 [2024-07-15 09:47:45.174493] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:17.071 [2024-07-15 09:47:45.174510] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:17.071 [2024-07-15 09:47:45.174522] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:17.071 [2024-07-15 09:47:45.174526] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15aef4634780 name raid_bdev1, state offline 00:21:17.366 09:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 53353 00:21:17.366 [2024-07-15 09:47:45.201199] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:17.366 09:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:21:17.366 00:21:17.366 real 0m10.030s 00:21:17.366 user 0m17.088s 00:21:17.366 sys 0m2.211s 00:21:17.366 ************************************ 00:21:17.366 END TEST raid_superblock_test 00:21:17.366 ************************************ 00:21:17.366 09:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:17.366 09:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.627 09:47:45 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:17.627 09:47:45 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:21:17.627 09:47:45 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:21:17.627 09:47:45 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:17.627 09:47:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:17.627 ************************************ 
00:21:17.627 START TEST raid_read_error_test 00:21:17.627 ************************************ 00:21:17.627 09:47:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 3 read 00:21:17.627 09:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:21:17.627 09:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:21:17.627 09:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:21:17.627 09:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:21:17.627 09:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:17.627 09:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:21:17.627 09:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:17.627 09:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:17.627 09:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:21:17.627 09:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:17.627 09:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:17.627 09:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:21:17.627 09:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:17.627 09:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:17.627 09:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:17.627 09:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:21:17.627 09:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:21:17.627 09:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:21:17.627 09:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:21:17.627 09:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:21:17.627 09:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:21:17.627 09:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:21:17.627 09:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:21:17.627 09:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:21:17.627 09:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:21:17.627 09:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.mUClVED4yg 00:21:17.627 09:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=53700 00:21:17.627 09:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 53700 /var/tmp/spdk-raid.sock 00:21:17.628 09:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:17.628 09:47:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 53700 ']' 00:21:17.628 09:47:45 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:17.628 09:47:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:17.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:17.628 09:47:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:17.628 09:47:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:17.628 09:47:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.628 [2024-07-15 09:47:45.539256] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:21:17.628 [2024-07-15 09:47:45.539522] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:21:18.195 EAL: TSC is not safe to use in SMP mode 00:21:18.195 EAL: TSC is not invariant 00:21:18.195 [2024-07-15 09:47:46.275346] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.454 [2024-07-15 09:47:46.393497] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:21:18.454 [2024-07-15 09:47:46.396092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.454 [2024-07-15 09:47:46.396842] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:18.454 [2024-07-15 09:47:46.396856] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:18.454 09:47:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:18.454 09:47:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:21:18.454 09:47:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:18.454 09:47:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:18.712 BaseBdev1_malloc 00:21:18.712 09:47:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:21:18.972 true 00:21:18.972 09:47:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:19.232 [2024-07-15 09:47:47.152407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:19.232 [2024-07-15 09:47:47.152481] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:19.232 [2024-07-15 09:47:47.152515] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2036e2a34780 00:21:19.232 [2024-07-15 09:47:47.152523] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:19.232 [2024-07-15 09:47:47.153318] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:19.232 [2024-07-15 09:47:47.153348] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:19.232 BaseBdev1 00:21:19.232 09:47:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:19.232 09:47:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:19.490 BaseBdev2_malloc 00:21:19.490 09:47:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:21:19.748 true 00:21:19.748 09:47:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:19.749 [2024-07-15 09:47:47.828442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:19.749 [2024-07-15 09:47:47.828517] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:19.749 [2024-07-15 09:47:47.828552] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2036e2a34c80 00:21:19.749 [2024-07-15 09:47:47.828560] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:19.749 [2024-07-15 09:47:47.829386] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:19.749 [2024-07-15 09:47:47.829431] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:19.749 BaseBdev2 00:21:19.749 09:47:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:19.749 09:47:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:20.010 BaseBdev3_malloc 00:21:20.010 09:47:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:21:20.269 true 00:21:20.269 09:47:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:20.528 [2024-07-15 09:47:48.488476] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:20.528 [2024-07-15 09:47:48.488544] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:20.528 [2024-07-15 09:47:48.488582] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2036e2a35180 00:21:20.528 [2024-07-15 09:47:48.488590] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:20.528 [2024-07-15 09:47:48.489373] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:20.528 [2024-07-15 09:47:48.489407] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:20.528 BaseBdev3 00:21:20.528 09:47:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:21:20.786 [2024-07-15 09:47:48.688484] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:20.786 [2024-07-15 09:47:48.689200] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:20.786 [2024-07-15 09:47:48.689223] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:20.786 
[2024-07-15 09:47:48.689279] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2036e2a35400 00:21:20.786 [2024-07-15 09:47:48.689284] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:20.786 [2024-07-15 09:47:48.689324] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2036e2aa0e20 00:21:20.786 [2024-07-15 09:47:48.689396] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2036e2a35400 00:21:20.786 [2024-07-15 09:47:48.689400] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2036e2a35400 00:21:20.786 [2024-07-15 09:47:48.689425] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:20.786 09:47:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:21:20.786 09:47:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:20.786 09:47:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:20.786 09:47:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:20.786 09:47:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:20.786 09:47:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:20.786 09:47:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:20.786 09:47:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:20.786 09:47:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:20.786 09:47:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:20.786 09:47:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.786 09:47:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.044 09:47:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:21.044 "name": "raid_bdev1", 00:21:21.044 "uuid": "4b8a4790-428f-11ef-a0af-c98d8ee52a94", 00:21:21.044 "strip_size_kb": 64, 00:21:21.044 "state": "online", 00:21:21.044 "raid_level": "raid0", 00:21:21.044 "superblock": true, 00:21:21.044 "num_base_bdevs": 3, 00:21:21.044 "num_base_bdevs_discovered": 3, 00:21:21.044 "num_base_bdevs_operational": 3, 00:21:21.044 "base_bdevs_list": [ 00:21:21.044 { 00:21:21.044 "name": "BaseBdev1", 00:21:21.044 "uuid": "78895cb5-43f9-2f56-b322-7865978e2a78", 00:21:21.044 "is_configured": true, 00:21:21.044 "data_offset": 2048, 00:21:21.044 "data_size": 63488 00:21:21.044 }, 00:21:21.044 { 00:21:21.044 "name": "BaseBdev2", 00:21:21.044 "uuid": "09bfbb8d-8d20-865b-8421-4de61bb14753", 00:21:21.044 "is_configured": true, 00:21:21.044 "data_offset": 2048, 00:21:21.044 "data_size": 63488 00:21:21.044 }, 00:21:21.044 { 00:21:21.044 "name": "BaseBdev3", 00:21:21.044 "uuid": "36b45b35-cbca-d25b-88f9-fed1618f4fd6", 00:21:21.044 "is_configured": true, 00:21:21.044 "data_offset": 2048, 00:21:21.044 "data_size": 63488 00:21:21.044 } 00:21:21.044 ] 00:21:21.044 }' 00:21:21.044 09:47:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:21.044 09:47:48 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:21.303 09:47:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:21:21.303 09:47:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:21.303 [2024-07-15 09:47:49.336626] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2036e2aa0ec0 00:21:22.239 09:47:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:21:22.498 09:47:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:21:22.498 09:47:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:21:22.498 09:47:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:21:22.498 09:47:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:21:22.498 09:47:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:22.498 09:47:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:22.498 09:47:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:22.498 09:47:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:22.499 09:47:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:22.499 09:47:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:22.499 09:47:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:22.499 09:47:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:22.499 09:47:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:22.499 09:47:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:22.499 09:47:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.758 09:47:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:22.758 "name": "raid_bdev1", 00:21:22.758 "uuid": "4b8a4790-428f-11ef-a0af-c98d8ee52a94", 00:21:22.758 "strip_size_kb": 64, 00:21:22.758 "state": "online", 00:21:22.758 "raid_level": "raid0", 00:21:22.758 "superblock": true, 00:21:22.758 "num_base_bdevs": 3, 00:21:22.758 "num_base_bdevs_discovered": 3, 00:21:22.758 "num_base_bdevs_operational": 3, 00:21:22.758 "base_bdevs_list": [ 00:21:22.758 { 00:21:22.758 "name": "BaseBdev1", 00:21:22.758 "uuid": "78895cb5-43f9-2f56-b322-7865978e2a78", 00:21:22.758 "is_configured": true, 00:21:22.758 "data_offset": 2048, 00:21:22.758 "data_size": 63488 00:21:22.758 }, 00:21:22.758 { 00:21:22.758 "name": "BaseBdev2", 00:21:22.758 "uuid": "09bfbb8d-8d20-865b-8421-4de61bb14753", 00:21:22.758 "is_configured": true, 00:21:22.758 "data_offset": 2048, 00:21:22.758 "data_size": 63488 00:21:22.758 }, 00:21:22.758 { 00:21:22.758 "name": "BaseBdev3", 00:21:22.758 "uuid": "36b45b35-cbca-d25b-88f9-fed1618f4fd6", 00:21:22.758 "is_configured": true, 00:21:22.758 "data_offset": 2048, 00:21:22.758 "data_size": 63488 
00:21:22.758 } 00:21:22.758 ] 00:21:22.758 }' 00:21:22.758 09:47:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:22.758 09:47:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.016 09:47:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:23.275 [2024-07-15 09:47:51.334077] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:23.275 [2024-07-15 09:47:51.334116] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:23.275 [2024-07-15 09:47:51.334460] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:23.275 [2024-07-15 09:47:51.334471] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:23.275 [2024-07-15 09:47:51.334479] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:23.275 [2024-07-15 09:47:51.334484] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2036e2a35400 name raid_bdev1, state offline 00:21:23.275 0 00:21:23.275 09:47:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 53700 00:21:23.275 09:47:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 53700 ']' 00:21:23.275 09:47:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 53700 00:21:23.275 09:47:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:21:23.275 09:47:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:21:23.275 09:47:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 53700 00:21:23.275 09:47:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:21:23.275 09:47:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:21:23.275 09:47:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:21:23.275 killing process with pid 53700 00:21:23.275 09:47:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 53700' 00:21:23.275 09:47:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 53700 00:21:23.275 [2024-07-15 09:47:51.367550] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:23.275 09:47:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 53700 00:21:23.533 [2024-07-15 09:47:51.393806] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:23.792 09:47:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.mUClVED4yg 00:21:23.792 09:47:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:21:23.792 09:47:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:21:23.792 09:47:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.50 00:21:23.792 09:47:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:21:23.792 09:47:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:23.792 09:47:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:21:23.792 09:47:51 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@847 -- # [[ 0.50 != \0\.\0\0 ]] 00:21:23.792 00:21:23.792 real 0m6.162s 00:21:23.792 user 0m9.159s 00:21:23.792 sys 0m1.346s 00:21:23.792 09:47:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:23.792 09:47:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.792 ************************************ 00:21:23.792 END TEST raid_read_error_test 00:21:23.792 ************************************ 00:21:23.792 09:47:51 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:23.792 09:47:51 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:21:23.792 09:47:51 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:21:23.792 09:47:51 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:23.792 09:47:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:23.792 ************************************ 00:21:23.792 START TEST raid_write_error_test 00:21:23.792 ************************************ 00:21:23.792 09:47:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 3 write 00:21:23.792 09:47:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:21:23.792 09:47:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:21:23.792 09:47:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:21:23.792 09:47:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:21:23.792 09:47:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:23.792 09:47:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:21:23.792 09:47:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:23.792 09:47:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:23.792 09:47:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:21:23.792 09:47:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:23.792 09:47:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:23.792 09:47:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:21:23.792 09:47:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:23.792 09:47:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:23.792 09:47:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:23.792 09:47:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:21:23.792 09:47:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:21:23.792 09:47:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:21:23.792 09:47:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:21:23.792 09:47:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:21:23.793 09:47:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:21:23.793 09:47:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:21:23.793 09:47:51 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:21:23.793 09:47:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:21:23.793 09:47:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:21:23.793 09:47:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.DvRbo8Jofi 00:21:23.793 09:47:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=53831 00:21:23.793 09:47:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:23.793 09:47:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 53831 /var/tmp/spdk-raid.sock 00:21:23.793 09:47:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 53831 ']' 00:21:23.793 09:47:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:23.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:23.793 09:47:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:23.793 09:47:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:23.793 09:47:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:23.793 09:47:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.793 [2024-07-15 09:47:51.752315] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:21:23.793 [2024-07-15 09:47:51.752575] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:21:24.361 EAL: TSC is not safe to use in SMP mode 00:21:24.361 EAL: TSC is not invariant 00:21:24.620 [2024-07-15 09:47:52.465168] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.620 [2024-07-15 09:47:52.578454] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
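The launch-and-wait sequence traced above condenses to a few lines of shell. What follows is only a sketch of what the bdevperf invocation plus the waitforlisten helper amount to in this log: the polling loop stands in for the more careful retry logic in autotest_common.sh, and rpc_get_methods is used here merely as a cheap probe RPC to see whether the app is answering on its socket.

bdevperf_log=$(mktemp -p /raidtest)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 \
    -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid > "$bdevperf_log" 2>&1 &
raid_pid=$!
# -z starts bdevperf idle; I/O only begins once perform_tests is sent
# over the RPC socket. Poll the socket until the app answers.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
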
00:21:24.620 [2024-07-15 09:47:52.580860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.620 [2024-07-15 09:47:52.581607] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:24.620 [2024-07-15 09:47:52.581618] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:24.620 09:47:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:24.620 09:47:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:21:24.620 09:47:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:24.620 09:47:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:24.879 BaseBdev1_malloc 00:21:24.879 09:47:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:21:25.138 true 00:21:25.138 09:47:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:25.417 [2024-07-15 09:47:53.292438] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:25.417 [2024-07-15 09:47:53.292512] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:25.417 [2024-07-15 09:47:53.292543] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18d772234780 00:21:25.417 [2024-07-15 09:47:53.292551] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:25.417 [2024-07-15 09:47:53.293306] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:25.417 [2024-07-15 09:47:53.293341] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:25.417 BaseBdev1 00:21:25.417 09:47:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:25.417 09:47:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:25.417 BaseBdev2_malloc 00:21:25.417 09:47:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:21:25.676 true 00:21:25.676 09:47:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:25.935 [2024-07-15 09:47:53.888482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:25.935 [2024-07-15 09:47:53.888557] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:25.935 [2024-07-15 09:47:53.888593] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18d772234c80 00:21:25.935 [2024-07-15 09:47:53.888600] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:25.935 [2024-07-15 09:47:53.889436] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:25.935 [2024-07-15 09:47:53.889484] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev2 00:21:25.935 BaseBdev2 00:21:25.935 09:47:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:25.935 09:47:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:26.193 BaseBdev3_malloc 00:21:26.193 09:47:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:21:26.193 true 00:21:26.453 09:47:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:26.453 [2024-07-15 09:47:54.492513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:26.453 [2024-07-15 09:47:54.492578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:26.453 [2024-07-15 09:47:54.492612] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18d772235180 00:21:26.453 [2024-07-15 09:47:54.492619] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:26.453 [2024-07-15 09:47:54.493383] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:26.453 [2024-07-15 09:47:54.493412] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:26.453 BaseBdev3 00:21:26.453 09:47:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:21:26.712 [2024-07-15 09:47:54.704537] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:26.712 [2024-07-15 09:47:54.705239] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:26.712 [2024-07-15 09:47:54.705268] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:26.712 [2024-07-15 09:47:54.705323] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x18d772235400 00:21:26.712 [2024-07-15 09:47:54.705328] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:26.712 [2024-07-15 09:47:54.705373] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x18d7722a0e20 00:21:26.712 [2024-07-15 09:47:54.705448] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x18d772235400 00:21:26.712 [2024-07-15 09:47:54.705451] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x18d772235400 00:21:26.712 [2024-07-15 09:47:54.705474] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:26.712 09:47:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:21:26.712 09:47:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:26.712 09:47:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:26.712 09:47:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:26.712 09:47:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
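Each base device in this test is a three-layer stack, built with exactly the RPCs traced above: a 32 MiB malloc bdev with 512-byte blocks, an error-injection bdev wrapped around it (bdev_error_create names it EE_<base>), and a passthru bdev on top that the raid consumes. Collected into one loop for readability (the $rpc shorthand is added here for brevity and is not part of the original script):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for i in 1 2 3; do
    $rpc bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
    $rpc bdev_error_create "BaseBdev${i}_malloc"            # creates EE_BaseBdev${i}_malloc
    $rpc bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
done
# -z 64: 64 KiB strip size; -s: reserve room for the raid superblock.
$rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
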
00:21:26.712 09:47:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:26.712 09:47:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:26.712 09:47:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:26.712 09:47:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:26.712 09:47:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:26.712 09:47:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.712 09:47:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.971 09:47:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:26.971 "name": "raid_bdev1", 00:21:26.971 "uuid": "4f2041a0-428f-11ef-a0af-c98d8ee52a94", 00:21:26.971 "strip_size_kb": 64, 00:21:26.971 "state": "online", 00:21:26.971 "raid_level": "raid0", 00:21:26.971 "superblock": true, 00:21:26.971 "num_base_bdevs": 3, 00:21:26.971 "num_base_bdevs_discovered": 3, 00:21:26.971 "num_base_bdevs_operational": 3, 00:21:26.971 "base_bdevs_list": [ 00:21:26.971 { 00:21:26.971 "name": "BaseBdev1", 00:21:26.971 "uuid": "1314b8d4-7188-1b5f-89fd-f937e640852f", 00:21:26.971 "is_configured": true, 00:21:26.971 "data_offset": 2048, 00:21:26.971 "data_size": 63488 00:21:26.971 }, 00:21:26.971 { 00:21:26.971 "name": "BaseBdev2", 00:21:26.971 "uuid": "9191d3b1-44d6-195b-90b4-a54ab99f9fbb", 00:21:26.971 "is_configured": true, 00:21:26.971 "data_offset": 2048, 00:21:26.971 "data_size": 63488 00:21:26.971 }, 00:21:26.971 { 00:21:26.971 "name": "BaseBdev3", 00:21:26.971 "uuid": "51c308ad-e1f3-7f52-9dae-2ac9527c60c7", 00:21:26.971 "is_configured": true, 00:21:26.971 "data_offset": 2048, 00:21:26.971 "data_size": 63488 00:21:26.971 } 00:21:26.971 ] 00:21:26.971 }' 00:21:26.971 09:47:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:26.971 09:47:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:27.231 09:47:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:21:27.231 09:47:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:27.491 [2024-07-15 09:47:55.348678] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x18d7722a0ec0 00:21:28.429 09:47:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:21:28.688 09:47:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:21:28.688 09:47:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:21:28.688 09:47:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:21:28.688 09:47:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:21:28.688 09:47:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:28.688 09:47:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=online 00:21:28.688 09:47:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:28.688 09:47:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:28.688 09:47:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:28.688 09:47:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:28.688 09:47:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:28.688 09:47:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:28.688 09:47:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:28.688 09:47:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.688 09:47:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.688 09:47:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:28.688 "name": "raid_bdev1", 00:21:28.688 "uuid": "4f2041a0-428f-11ef-a0af-c98d8ee52a94", 00:21:28.688 "strip_size_kb": 64, 00:21:28.688 "state": "online", 00:21:28.688 "raid_level": "raid0", 00:21:28.688 "superblock": true, 00:21:28.688 "num_base_bdevs": 3, 00:21:28.688 "num_base_bdevs_discovered": 3, 00:21:28.688 "num_base_bdevs_operational": 3, 00:21:28.688 "base_bdevs_list": [ 00:21:28.688 { 00:21:28.689 "name": "BaseBdev1", 00:21:28.689 "uuid": "1314b8d4-7188-1b5f-89fd-f937e640852f", 00:21:28.689 "is_configured": true, 00:21:28.689 "data_offset": 2048, 00:21:28.689 "data_size": 63488 00:21:28.689 }, 00:21:28.689 { 00:21:28.689 "name": "BaseBdev2", 00:21:28.689 "uuid": "9191d3b1-44d6-195b-90b4-a54ab99f9fbb", 00:21:28.689 "is_configured": true, 00:21:28.689 "data_offset": 2048, 00:21:28.689 "data_size": 63488 00:21:28.689 }, 00:21:28.689 { 00:21:28.689 "name": "BaseBdev3", 00:21:28.689 "uuid": "51c308ad-e1f3-7f52-9dae-2ac9527c60c7", 00:21:28.689 "is_configured": true, 00:21:28.689 "data_offset": 2048, 00:21:28.689 "data_size": 63488 00:21:28.689 } 00:21:28.689 ] 00:21:28.689 }' 00:21:28.689 09:47:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:28.689 09:47:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.255 09:47:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:29.255 [2024-07-15 09:47:57.351762] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:29.255 [2024-07-15 09:47:57.351798] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:29.256 [2024-07-15 09:47:57.352150] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:29.256 [2024-07-15 09:47:57.352160] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:29.256 [2024-07-15 09:47:57.352168] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:29.256 [2024-07-15 09:47:57.352173] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x18d772235400 name raid_bdev1, state offline 00:21:29.256 0 00:21:29.515 09:47:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # 
killprocess 53831 00:21:29.515 09:47:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 53831 ']' 00:21:29.515 09:47:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 53831 00:21:29.515 09:47:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:21:29.515 09:47:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:21:29.515 09:47:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 53831 00:21:29.515 09:47:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:21:29.515 09:47:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:21:29.515 09:47:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:21:29.515 killing process with pid 53831 00:21:29.515 09:47:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 53831' 00:21:29.515 09:47:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 53831 00:21:29.515 [2024-07-15 09:47:57.384047] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:29.515 09:47:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 53831 00:21:29.515 [2024-07-15 09:47:57.409900] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:29.776 09:47:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.DvRbo8Jofi 00:21:29.776 09:47:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:21:29.776 09:47:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:21:29.776 09:47:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.50 00:21:29.776 09:47:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:21:29.776 09:47:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:29.776 09:47:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:21:29.776 09:47:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.50 != \0\.\0\0 ]] 00:21:29.776 00:21:29.776 real 0m5.943s 00:21:29.776 user 0m8.694s 00:21:29.776 sys 0m1.345s 00:21:29.776 09:47:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:29.776 ************************************ 00:21:29.776 END TEST raid_write_error_test 00:21:29.776 09:47:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.776 ************************************ 00:21:29.776 09:47:57 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:29.776 09:47:57 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:21:29.776 09:47:57 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:21:29.776 09:47:57 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:21:29.776 09:47:57 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:29.776 09:47:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:29.776 ************************************ 00:21:29.776 START TEST raid_state_function_test 00:21:29.776 ************************************ 00:21:29.776 09:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # 
raid_state_function_test concat 3 false 00:21:29.776 09:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:21:29.776 09:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:21:29.776 09:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:21:29.776 09:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:21:29.776 09:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:21:29.776 09:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:29.776 09:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:21:29.776 09:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:29.776 09:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:29.776 09:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:21:29.776 09:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:29.776 09:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:29.776 09:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:21:29.776 09:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:29.776 09:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:29.776 09:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:29.776 09:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:21:29.776 09:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:21:29.776 09:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:21:29.776 09:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:21:29.776 09:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:21:29.776 09:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:21:29.776 09:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:21:29.776 09:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:21:29.776 09:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:21:29.776 09:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:21:29.776 09:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=53956 00:21:29.776 Process raid pid: 53956 00:21:29.776 09:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 53956' 00:21:29.776 09:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:29.776 09:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 53956 /var/tmp/spdk-raid.sock 00:21:29.776 09:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 53956 ']' 
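Before the state-function test gets going, note how the two error tests above graded themselves: bdev_error_inject_error arms EE_BaseBdev1_malloc to fail reads (or writes), perform_tests drives I/O through raid_bdev1, and the pass/fail decision is a one-liner over the bdevperf log. Condensed from the trace above (the log path is the mktemp result from the write-error setup):

# Drop per-job lines, keep the raid_bdev1 summary row, take column 6
# (failures per second).
fail_per_s=$(grep -v Job /raidtest/tmp.DvRbo8Jofi | grep raid_bdev1 | awk '{print $6}')
# raid0 carries no redundancy, so injected errors must surface at the
# raid bdev: the test demands a non-zero failure rate (0.50 in this run).
[[ "$fail_per_s" != "0.00" ]]
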
00:21:29.776 09:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:29.776 09:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:29.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:29.776 09:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:29.776 09:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:29.776 09:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.776 [2024-07-15 09:47:57.748540] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:21:29.776 [2024-07-15 09:47:57.748864] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:21:30.714 EAL: TSC is not safe to use in SMP mode 00:21:30.714 EAL: TSC is not invariant 00:21:30.714 [2024-07-15 09:47:58.460849] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.714 [2024-07-15 09:47:58.573011] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:21:30.714 [2024-07-15 09:47:58.575551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.714 [2024-07-15 09:47:58.576293] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:30.714 [2024-07-15 09:47:58.576305] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:30.714 09:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:30.714 09:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:21:30.714 09:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:30.971 [2024-07-15 09:47:58.863141] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:30.971 [2024-07-15 09:47:58.863230] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:30.971 [2024-07-15 09:47:58.863240] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:30.971 [2024-07-15 09:47:58.863255] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:30.971 [2024-07-15 09:47:58.863263] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:30.971 [2024-07-15 09:47:58.863278] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:30.971 09:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:30.971 09:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:30.971 09:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:30.971 09:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:30.971 09:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:21:30.971 09:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:30.971 09:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:30.971 09:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:30.971 09:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:30.971 09:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:30.971 09:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:30.971 09:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:31.230 09:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:31.230 "name": "Existed_Raid", 00:21:31.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.230 "strip_size_kb": 64, 00:21:31.230 "state": "configuring", 00:21:31.230 "raid_level": "concat", 00:21:31.230 "superblock": false, 00:21:31.230 "num_base_bdevs": 3, 00:21:31.230 "num_base_bdevs_discovered": 0, 00:21:31.230 "num_base_bdevs_operational": 3, 00:21:31.230 "base_bdevs_list": [ 00:21:31.230 { 00:21:31.230 "name": "BaseBdev1", 00:21:31.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.230 "is_configured": false, 00:21:31.230 "data_offset": 0, 00:21:31.230 "data_size": 0 00:21:31.230 }, 00:21:31.230 { 00:21:31.230 "name": "BaseBdev2", 00:21:31.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.230 "is_configured": false, 00:21:31.230 "data_offset": 0, 00:21:31.230 "data_size": 0 00:21:31.230 }, 00:21:31.230 { 00:21:31.230 "name": "BaseBdev3", 00:21:31.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.230 "is_configured": false, 00:21:31.230 "data_offset": 0, 00:21:31.230 "data_size": 0 00:21:31.230 } 00:21:31.230 ] 00:21:31.230 }' 00:21:31.230 09:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:31.230 09:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.489 09:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:31.747 [2024-07-15 09:47:59.635125] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:31.747 [2024-07-15 09:47:59.635156] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3b5f00034500 name Existed_Raid, state configuring 00:21:31.747 09:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:31.747 [2024-07-15 09:47:59.843146] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:31.747 [2024-07-15 09:47:59.843204] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:31.747 [2024-07-15 09:47:59.843208] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:31.747 [2024-07-15 09:47:59.843214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:31.747 [2024-07-15 
09:47:59.843217] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:31.747 [2024-07-15 09:47:59.843224] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:32.005 09:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:32.005 [2024-07-15 09:48:00.068322] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:32.005 BaseBdev1 00:21:32.005 09:48:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:21:32.005 09:48:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:21:32.005 09:48:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:32.005 09:48:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:32.005 09:48:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:32.005 09:48:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:32.005 09:48:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:32.295 09:48:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:32.554 [ 00:21:32.554 { 00:21:32.554 "name": "BaseBdev1", 00:21:32.554 "aliases": [ 00:21:32.554 "5252884a-428f-11ef-a0af-c98d8ee52a94" 00:21:32.554 ], 00:21:32.554 "product_name": "Malloc disk", 00:21:32.554 "block_size": 512, 00:21:32.554 "num_blocks": 65536, 00:21:32.554 "uuid": "5252884a-428f-11ef-a0af-c98d8ee52a94", 00:21:32.554 "assigned_rate_limits": { 00:21:32.554 "rw_ios_per_sec": 0, 00:21:32.554 "rw_mbytes_per_sec": 0, 00:21:32.554 "r_mbytes_per_sec": 0, 00:21:32.554 "w_mbytes_per_sec": 0 00:21:32.554 }, 00:21:32.554 "claimed": true, 00:21:32.554 "claim_type": "exclusive_write", 00:21:32.554 "zoned": false, 00:21:32.554 "supported_io_types": { 00:21:32.554 "read": true, 00:21:32.554 "write": true, 00:21:32.554 "unmap": true, 00:21:32.554 "flush": true, 00:21:32.554 "reset": true, 00:21:32.554 "nvme_admin": false, 00:21:32.554 "nvme_io": false, 00:21:32.554 "nvme_io_md": false, 00:21:32.554 "write_zeroes": true, 00:21:32.554 "zcopy": true, 00:21:32.554 "get_zone_info": false, 00:21:32.554 "zone_management": false, 00:21:32.554 "zone_append": false, 00:21:32.554 "compare": false, 00:21:32.554 "compare_and_write": false, 00:21:32.554 "abort": true, 00:21:32.554 "seek_hole": false, 00:21:32.554 "seek_data": false, 00:21:32.554 "copy": true, 00:21:32.554 "nvme_iov_md": false 00:21:32.554 }, 00:21:32.554 "memory_domains": [ 00:21:32.554 { 00:21:32.554 "dma_device_id": "system", 00:21:32.554 "dma_device_type": 1 00:21:32.554 }, 00:21:32.554 { 00:21:32.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:32.554 "dma_device_type": 2 00:21:32.554 } 00:21:32.554 ], 00:21:32.554 "driver_specific": {} 00:21:32.554 } 00:21:32.554 ] 00:21:32.554 09:48:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:32.554 09:48:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
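The discovery flow above is worth restating: bdev_raid_create may name base bdevs that do not exist yet, leaving Existed_Raid in the "configuring" state, and each base bdev is claimed the moment it registers. A sketch of the round-trip, using only RPCs visible in this trace (the waitforbdev helper is essentially bdev_get_bdevs with a timeout):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
$rpc bdev_malloc_create 32 512 -b BaseBdev1           # raid claims it on arrival
$rpc bdev_wait_for_examine
$rpc bdev_get_bdevs -b BaseBdev1 -t 2000 >/dev/null   # errors out if BaseBdev1 never registers
# Still "configuring": one of three base bdevs discovered, as the JSON
# dump that follows confirms.
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .num_base_bdevs_discovered'
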
00:21:32.554 09:48:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:32.554 09:48:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:32.554 09:48:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:32.554 09:48:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:32.554 09:48:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:32.554 09:48:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:32.554 09:48:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:32.554 09:48:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:32.554 09:48:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:32.554 09:48:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:32.554 09:48:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:32.812 09:48:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:32.812 "name": "Existed_Raid", 00:21:32.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.812 "strip_size_kb": 64, 00:21:32.812 "state": "configuring", 00:21:32.812 "raid_level": "concat", 00:21:32.812 "superblock": false, 00:21:32.812 "num_base_bdevs": 3, 00:21:32.812 "num_base_bdevs_discovered": 1, 00:21:32.812 "num_base_bdevs_operational": 3, 00:21:32.812 "base_bdevs_list": [ 00:21:32.812 { 00:21:32.812 "name": "BaseBdev1", 00:21:32.812 "uuid": "5252884a-428f-11ef-a0af-c98d8ee52a94", 00:21:32.812 "is_configured": true, 00:21:32.812 "data_offset": 0, 00:21:32.812 "data_size": 65536 00:21:32.812 }, 00:21:32.812 { 00:21:32.812 "name": "BaseBdev2", 00:21:32.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.812 "is_configured": false, 00:21:32.812 "data_offset": 0, 00:21:32.812 "data_size": 0 00:21:32.812 }, 00:21:32.812 { 00:21:32.812 "name": "BaseBdev3", 00:21:32.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.812 "is_configured": false, 00:21:32.812 "data_offset": 0, 00:21:32.812 "data_size": 0 00:21:32.812 } 00:21:32.812 ] 00:21:32.813 }' 00:21:32.813 09:48:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:32.813 09:48:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.070 09:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:33.328 [2024-07-15 09:48:01.291207] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:33.328 [2024-07-15 09:48:01.291234] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3b5f00034500 name Existed_Raid, state configuring 00:21:33.328 09:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:33.586 [2024-07-15 09:48:01.499233] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 
is claimed 00:21:33.586 [2024-07-15 09:48:01.500086] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:33.586 [2024-07-15 09:48:01.500134] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:33.586 [2024-07-15 09:48:01.500138] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:33.586 [2024-07-15 09:48:01.500144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:33.586 09:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:21:33.586 09:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:33.586 09:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:33.586 09:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:33.586 09:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:33.586 09:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:33.586 09:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:33.586 09:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:33.586 09:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:33.586 09:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:33.586 09:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:33.586 09:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:33.586 09:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:33.586 09:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:33.845 09:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:33.845 "name": "Existed_Raid", 00:21:33.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.845 "strip_size_kb": 64, 00:21:33.845 "state": "configuring", 00:21:33.845 "raid_level": "concat", 00:21:33.845 "superblock": false, 00:21:33.845 "num_base_bdevs": 3, 00:21:33.845 "num_base_bdevs_discovered": 1, 00:21:33.845 "num_base_bdevs_operational": 3, 00:21:33.845 "base_bdevs_list": [ 00:21:33.845 { 00:21:33.845 "name": "BaseBdev1", 00:21:33.845 "uuid": "5252884a-428f-11ef-a0af-c98d8ee52a94", 00:21:33.845 "is_configured": true, 00:21:33.845 "data_offset": 0, 00:21:33.845 "data_size": 65536 00:21:33.845 }, 00:21:33.845 { 00:21:33.845 "name": "BaseBdev2", 00:21:33.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.845 "is_configured": false, 00:21:33.845 "data_offset": 0, 00:21:33.845 "data_size": 0 00:21:33.845 }, 00:21:33.845 { 00:21:33.845 "name": "BaseBdev3", 00:21:33.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.845 "is_configured": false, 00:21:33.845 "data_offset": 0, 00:21:33.845 "data_size": 0 00:21:33.845 } 00:21:33.845 ] 00:21:33.845 }' 00:21:33.845 09:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:33.845 09:48:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.104 09:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:34.104 [2024-07-15 09:48:02.171392] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:34.104 BaseBdev2 00:21:34.104 09:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:21:34.104 09:48:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:21:34.104 09:48:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:34.104 09:48:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:34.104 09:48:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:34.104 09:48:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:34.104 09:48:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:34.363 09:48:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:34.622 [ 00:21:34.622 { 00:21:34.622 "name": "BaseBdev2", 00:21:34.622 "aliases": [ 00:21:34.622 "53939723-428f-11ef-a0af-c98d8ee52a94" 00:21:34.622 ], 00:21:34.622 "product_name": "Malloc disk", 00:21:34.622 "block_size": 512, 00:21:34.622 "num_blocks": 65536, 00:21:34.622 "uuid": "53939723-428f-11ef-a0af-c98d8ee52a94", 00:21:34.622 "assigned_rate_limits": { 00:21:34.622 "rw_ios_per_sec": 0, 00:21:34.622 "rw_mbytes_per_sec": 0, 00:21:34.622 "r_mbytes_per_sec": 0, 00:21:34.622 "w_mbytes_per_sec": 0 00:21:34.622 }, 00:21:34.622 "claimed": true, 00:21:34.622 "claim_type": "exclusive_write", 00:21:34.622 "zoned": false, 00:21:34.622 "supported_io_types": { 00:21:34.622 "read": true, 00:21:34.622 "write": true, 00:21:34.622 "unmap": true, 00:21:34.622 "flush": true, 00:21:34.622 "reset": true, 00:21:34.622 "nvme_admin": false, 00:21:34.622 "nvme_io": false, 00:21:34.622 "nvme_io_md": false, 00:21:34.622 "write_zeroes": true, 00:21:34.622 "zcopy": true, 00:21:34.623 "get_zone_info": false, 00:21:34.623 "zone_management": false, 00:21:34.623 "zone_append": false, 00:21:34.623 "compare": false, 00:21:34.623 "compare_and_write": false, 00:21:34.623 "abort": true, 00:21:34.623 "seek_hole": false, 00:21:34.623 "seek_data": false, 00:21:34.623 "copy": true, 00:21:34.623 "nvme_iov_md": false 00:21:34.623 }, 00:21:34.623 "memory_domains": [ 00:21:34.623 { 00:21:34.623 "dma_device_id": "system", 00:21:34.623 "dma_device_type": 1 00:21:34.623 }, 00:21:34.623 { 00:21:34.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:34.623 "dma_device_type": 2 00:21:34.623 } 00:21:34.623 ], 00:21:34.623 "driver_specific": {} 00:21:34.623 } 00:21:34.623 ] 00:21:34.623 09:48:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:34.623 09:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:34.623 09:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:34.623 09:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:34.623 09:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:34.623 09:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:34.623 09:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:34.623 09:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:34.623 09:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:34.623 09:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:34.623 09:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:34.623 09:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:34.623 09:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:34.623 09:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:34.623 09:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:34.881 09:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:34.881 "name": "Existed_Raid", 00:21:34.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.881 "strip_size_kb": 64, 00:21:34.881 "state": "configuring", 00:21:34.881 "raid_level": "concat", 00:21:34.881 "superblock": false, 00:21:34.881 "num_base_bdevs": 3, 00:21:34.881 "num_base_bdevs_discovered": 2, 00:21:34.881 "num_base_bdevs_operational": 3, 00:21:34.881 "base_bdevs_list": [ 00:21:34.881 { 00:21:34.881 "name": "BaseBdev1", 00:21:34.881 "uuid": "5252884a-428f-11ef-a0af-c98d8ee52a94", 00:21:34.881 "is_configured": true, 00:21:34.881 "data_offset": 0, 00:21:34.881 "data_size": 65536 00:21:34.881 }, 00:21:34.881 { 00:21:34.881 "name": "BaseBdev2", 00:21:34.881 "uuid": "53939723-428f-11ef-a0af-c98d8ee52a94", 00:21:34.881 "is_configured": true, 00:21:34.881 "data_offset": 0, 00:21:34.881 "data_size": 65536 00:21:34.881 }, 00:21:34.881 { 00:21:34.881 "name": "BaseBdev3", 00:21:34.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.881 "is_configured": false, 00:21:34.882 "data_offset": 0, 00:21:34.882 "data_size": 0 00:21:34.882 } 00:21:34.882 ] 00:21:34.882 }' 00:21:34.882 09:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:34.882 09:48:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.446 09:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:35.704 [2024-07-15 09:48:03.699556] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:35.704 [2024-07-15 09:48:03.699589] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3b5f00034a00 00:21:35.704 [2024-07-15 09:48:03.699594] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:21:35.704 [2024-07-15 09:48:03.699614] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3b5f00097e20 00:21:35.704 [2024-07-15 09:48:03.699718] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3b5f00034a00 00:21:35.704 [2024-07-15 09:48:03.699721] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x3b5f00034a00 00:21:35.704 [2024-07-15 09:48:03.699753] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:35.704 BaseBdev3 00:21:35.704 09:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:21:35.704 09:48:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:21:35.704 09:48:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:35.704 09:48:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:35.704 09:48:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:35.704 09:48:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:35.704 09:48:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:35.964 09:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:36.242 [ 00:21:36.242 { 00:21:36.242 "name": "BaseBdev3", 00:21:36.242 "aliases": [ 00:21:36.242 "547cc476-428f-11ef-a0af-c98d8ee52a94" 00:21:36.242 ], 00:21:36.242 "product_name": "Malloc disk", 00:21:36.242 "block_size": 512, 00:21:36.242 "num_blocks": 65536, 00:21:36.242 "uuid": "547cc476-428f-11ef-a0af-c98d8ee52a94", 00:21:36.242 "assigned_rate_limits": { 00:21:36.242 "rw_ios_per_sec": 0, 00:21:36.242 "rw_mbytes_per_sec": 0, 00:21:36.242 "r_mbytes_per_sec": 0, 00:21:36.242 "w_mbytes_per_sec": 0 00:21:36.242 }, 00:21:36.242 "claimed": true, 00:21:36.242 "claim_type": "exclusive_write", 00:21:36.242 "zoned": false, 00:21:36.242 "supported_io_types": { 00:21:36.242 "read": true, 00:21:36.242 "write": true, 00:21:36.242 "unmap": true, 00:21:36.242 "flush": true, 00:21:36.242 "reset": true, 00:21:36.242 "nvme_admin": false, 00:21:36.242 "nvme_io": false, 00:21:36.242 "nvme_io_md": false, 00:21:36.242 "write_zeroes": true, 00:21:36.242 "zcopy": true, 00:21:36.242 "get_zone_info": false, 00:21:36.242 "zone_management": false, 00:21:36.242 "zone_append": false, 00:21:36.242 "compare": false, 00:21:36.242 "compare_and_write": false, 00:21:36.242 "abort": true, 00:21:36.242 "seek_hole": false, 00:21:36.242 "seek_data": false, 00:21:36.242 "copy": true, 00:21:36.242 "nvme_iov_md": false 00:21:36.242 }, 00:21:36.242 "memory_domains": [ 00:21:36.242 { 00:21:36.242 "dma_device_id": "system", 00:21:36.242 "dma_device_type": 1 00:21:36.242 }, 00:21:36.242 { 00:21:36.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:36.242 "dma_device_type": 2 00:21:36.242 } 00:21:36.242 ], 00:21:36.242 "driver_specific": {} 00:21:36.242 } 00:21:36.242 ] 00:21:36.242 09:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:36.242 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:36.242 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:36.242 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 
3 00:21:36.242 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:36.242 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:36.242 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:36.242 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:36.242 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:36.242 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:36.242 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:36.242 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:36.242 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:36.242 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:36.242 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:36.501 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:36.501 "name": "Existed_Raid", 00:21:36.501 "uuid": "547ccb96-428f-11ef-a0af-c98d8ee52a94", 00:21:36.501 "strip_size_kb": 64, 00:21:36.501 "state": "online", 00:21:36.501 "raid_level": "concat", 00:21:36.501 "superblock": false, 00:21:36.501 "num_base_bdevs": 3, 00:21:36.501 "num_base_bdevs_discovered": 3, 00:21:36.501 "num_base_bdevs_operational": 3, 00:21:36.501 "base_bdevs_list": [ 00:21:36.501 { 00:21:36.501 "name": "BaseBdev1", 00:21:36.501 "uuid": "5252884a-428f-11ef-a0af-c98d8ee52a94", 00:21:36.501 "is_configured": true, 00:21:36.501 "data_offset": 0, 00:21:36.501 "data_size": 65536 00:21:36.501 }, 00:21:36.501 { 00:21:36.501 "name": "BaseBdev2", 00:21:36.501 "uuid": "53939723-428f-11ef-a0af-c98d8ee52a94", 00:21:36.501 "is_configured": true, 00:21:36.501 "data_offset": 0, 00:21:36.501 "data_size": 65536 00:21:36.501 }, 00:21:36.501 { 00:21:36.501 "name": "BaseBdev3", 00:21:36.501 "uuid": "547cc476-428f-11ef-a0af-c98d8ee52a94", 00:21:36.501 "is_configured": true, 00:21:36.501 "data_offset": 0, 00:21:36.501 "data_size": 65536 00:21:36.501 } 00:21:36.501 ] 00:21:36.501 }' 00:21:36.501 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:36.501 09:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:36.759 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:21:36.759 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:36.759 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:36.759 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:36.759 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:36.759 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:36.759 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:36.759 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:37.016 [2024-07-15 09:48:04.943452] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:37.017 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:37.017 "name": "Existed_Raid", 00:21:37.017 "aliases": [ 00:21:37.017 "547ccb96-428f-11ef-a0af-c98d8ee52a94" 00:21:37.017 ], 00:21:37.017 "product_name": "Raid Volume", 00:21:37.017 "block_size": 512, 00:21:37.017 "num_blocks": 196608, 00:21:37.017 "uuid": "547ccb96-428f-11ef-a0af-c98d8ee52a94", 00:21:37.017 "assigned_rate_limits": { 00:21:37.017 "rw_ios_per_sec": 0, 00:21:37.017 "rw_mbytes_per_sec": 0, 00:21:37.017 "r_mbytes_per_sec": 0, 00:21:37.017 "w_mbytes_per_sec": 0 00:21:37.017 }, 00:21:37.017 "claimed": false, 00:21:37.017 "zoned": false, 00:21:37.017 "supported_io_types": { 00:21:37.017 "read": true, 00:21:37.017 "write": true, 00:21:37.017 "unmap": true, 00:21:37.017 "flush": true, 00:21:37.017 "reset": true, 00:21:37.017 "nvme_admin": false, 00:21:37.017 "nvme_io": false, 00:21:37.017 "nvme_io_md": false, 00:21:37.017 "write_zeroes": true, 00:21:37.017 "zcopy": false, 00:21:37.017 "get_zone_info": false, 00:21:37.017 "zone_management": false, 00:21:37.017 "zone_append": false, 00:21:37.017 "compare": false, 00:21:37.017 "compare_and_write": false, 00:21:37.017 "abort": false, 00:21:37.017 "seek_hole": false, 00:21:37.017 "seek_data": false, 00:21:37.017 "copy": false, 00:21:37.017 "nvme_iov_md": false 00:21:37.017 }, 00:21:37.017 "memory_domains": [ 00:21:37.017 { 00:21:37.017 "dma_device_id": "system", 00:21:37.017 "dma_device_type": 1 00:21:37.017 }, 00:21:37.017 { 00:21:37.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:37.017 "dma_device_type": 2 00:21:37.017 }, 00:21:37.017 { 00:21:37.017 "dma_device_id": "system", 00:21:37.017 "dma_device_type": 1 00:21:37.017 }, 00:21:37.017 { 00:21:37.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:37.017 "dma_device_type": 2 00:21:37.017 }, 00:21:37.017 { 00:21:37.017 "dma_device_id": "system", 00:21:37.017 "dma_device_type": 1 00:21:37.017 }, 00:21:37.017 { 00:21:37.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:37.017 "dma_device_type": 2 00:21:37.017 } 00:21:37.017 ], 00:21:37.017 "driver_specific": { 00:21:37.017 "raid": { 00:21:37.017 "uuid": "547ccb96-428f-11ef-a0af-c98d8ee52a94", 00:21:37.017 "strip_size_kb": 64, 00:21:37.017 "state": "online", 00:21:37.017 "raid_level": "concat", 00:21:37.017 "superblock": false, 00:21:37.017 "num_base_bdevs": 3, 00:21:37.017 "num_base_bdevs_discovered": 3, 00:21:37.017 "num_base_bdevs_operational": 3, 00:21:37.017 "base_bdevs_list": [ 00:21:37.017 { 00:21:37.017 "name": "BaseBdev1", 00:21:37.017 "uuid": "5252884a-428f-11ef-a0af-c98d8ee52a94", 00:21:37.017 "is_configured": true, 00:21:37.017 "data_offset": 0, 00:21:37.017 "data_size": 65536 00:21:37.017 }, 00:21:37.017 { 00:21:37.017 "name": "BaseBdev2", 00:21:37.017 "uuid": "53939723-428f-11ef-a0af-c98d8ee52a94", 00:21:37.017 "is_configured": true, 00:21:37.017 "data_offset": 0, 00:21:37.017 "data_size": 65536 00:21:37.017 }, 00:21:37.017 { 00:21:37.017 "name": "BaseBdev3", 00:21:37.017 "uuid": "547cc476-428f-11ef-a0af-c98d8ee52a94", 00:21:37.017 "is_configured": true, 00:21:37.017 "data_offset": 0, 00:21:37.017 "data_size": 65536 00:21:37.017 } 00:21:37.017 ] 00:21:37.017 } 00:21:37.017 } 00:21:37.017 }' 00:21:37.017 09:48:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:37.017 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:21:37.017 BaseBdev2 00:21:37.017 BaseBdev3' 00:21:37.017 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:37.017 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:37.017 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:21:37.275 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:37.275 "name": "BaseBdev1", 00:21:37.275 "aliases": [ 00:21:37.275 "5252884a-428f-11ef-a0af-c98d8ee52a94" 00:21:37.275 ], 00:21:37.275 "product_name": "Malloc disk", 00:21:37.275 "block_size": 512, 00:21:37.275 "num_blocks": 65536, 00:21:37.275 "uuid": "5252884a-428f-11ef-a0af-c98d8ee52a94", 00:21:37.275 "assigned_rate_limits": { 00:21:37.275 "rw_ios_per_sec": 0, 00:21:37.275 "rw_mbytes_per_sec": 0, 00:21:37.275 "r_mbytes_per_sec": 0, 00:21:37.275 "w_mbytes_per_sec": 0 00:21:37.275 }, 00:21:37.275 "claimed": true, 00:21:37.275 "claim_type": "exclusive_write", 00:21:37.275 "zoned": false, 00:21:37.275 "supported_io_types": { 00:21:37.275 "read": true, 00:21:37.275 "write": true, 00:21:37.275 "unmap": true, 00:21:37.275 "flush": true, 00:21:37.275 "reset": true, 00:21:37.275 "nvme_admin": false, 00:21:37.275 "nvme_io": false, 00:21:37.275 "nvme_io_md": false, 00:21:37.275 "write_zeroes": true, 00:21:37.275 "zcopy": true, 00:21:37.275 "get_zone_info": false, 00:21:37.275 "zone_management": false, 00:21:37.275 "zone_append": false, 00:21:37.275 "compare": false, 00:21:37.275 "compare_and_write": false, 00:21:37.275 "abort": true, 00:21:37.275 "seek_hole": false, 00:21:37.275 "seek_data": false, 00:21:37.275 "copy": true, 00:21:37.275 "nvme_iov_md": false 00:21:37.275 }, 00:21:37.275 "memory_domains": [ 00:21:37.275 { 00:21:37.275 "dma_device_id": "system", 00:21:37.275 "dma_device_type": 1 00:21:37.275 }, 00:21:37.275 { 00:21:37.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:37.275 "dma_device_type": 2 00:21:37.275 } 00:21:37.275 ], 00:21:37.275 "driver_specific": {} 00:21:37.275 }' 00:21:37.275 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:37.275 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:37.275 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:37.275 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:37.275 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:37.275 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:37.275 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:37.275 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:37.275 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:37.275 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:37.275 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:21:37.275 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:37.275 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:37.275 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:37.275 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:37.534 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:37.534 "name": "BaseBdev2", 00:21:37.534 "aliases": [ 00:21:37.534 "53939723-428f-11ef-a0af-c98d8ee52a94" 00:21:37.534 ], 00:21:37.534 "product_name": "Malloc disk", 00:21:37.534 "block_size": 512, 00:21:37.534 "num_blocks": 65536, 00:21:37.534 "uuid": "53939723-428f-11ef-a0af-c98d8ee52a94", 00:21:37.534 "assigned_rate_limits": { 00:21:37.534 "rw_ios_per_sec": 0, 00:21:37.534 "rw_mbytes_per_sec": 0, 00:21:37.534 "r_mbytes_per_sec": 0, 00:21:37.534 "w_mbytes_per_sec": 0 00:21:37.534 }, 00:21:37.534 "claimed": true, 00:21:37.534 "claim_type": "exclusive_write", 00:21:37.534 "zoned": false, 00:21:37.534 "supported_io_types": { 00:21:37.534 "read": true, 00:21:37.534 "write": true, 00:21:37.534 "unmap": true, 00:21:37.534 "flush": true, 00:21:37.534 "reset": true, 00:21:37.534 "nvme_admin": false, 00:21:37.534 "nvme_io": false, 00:21:37.534 "nvme_io_md": false, 00:21:37.534 "write_zeroes": true, 00:21:37.534 "zcopy": true, 00:21:37.534 "get_zone_info": false, 00:21:37.534 "zone_management": false, 00:21:37.534 "zone_append": false, 00:21:37.534 "compare": false, 00:21:37.534 "compare_and_write": false, 00:21:37.534 "abort": true, 00:21:37.534 "seek_hole": false, 00:21:37.534 "seek_data": false, 00:21:37.534 "copy": true, 00:21:37.534 "nvme_iov_md": false 00:21:37.534 }, 00:21:37.534 "memory_domains": [ 00:21:37.534 { 00:21:37.534 "dma_device_id": "system", 00:21:37.534 "dma_device_type": 1 00:21:37.534 }, 00:21:37.534 { 00:21:37.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:37.534 "dma_device_type": 2 00:21:37.534 } 00:21:37.534 ], 00:21:37.534 "driver_specific": {} 00:21:37.534 }' 00:21:37.534 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:37.534 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:37.534 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:37.534 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:37.534 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:37.534 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:37.534 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:37.534 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:37.534 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:37.534 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:37.534 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:37.534 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:37.534 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- 
# for name in $base_bdev_names 00:21:37.534 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:37.534 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:37.793 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:37.793 "name": "BaseBdev3", 00:21:37.793 "aliases": [ 00:21:37.793 "547cc476-428f-11ef-a0af-c98d8ee52a94" 00:21:37.793 ], 00:21:37.793 "product_name": "Malloc disk", 00:21:37.793 "block_size": 512, 00:21:37.793 "num_blocks": 65536, 00:21:37.793 "uuid": "547cc476-428f-11ef-a0af-c98d8ee52a94", 00:21:37.793 "assigned_rate_limits": { 00:21:37.793 "rw_ios_per_sec": 0, 00:21:37.793 "rw_mbytes_per_sec": 0, 00:21:37.793 "r_mbytes_per_sec": 0, 00:21:37.793 "w_mbytes_per_sec": 0 00:21:37.793 }, 00:21:37.793 "claimed": true, 00:21:37.793 "claim_type": "exclusive_write", 00:21:37.793 "zoned": false, 00:21:37.793 "supported_io_types": { 00:21:37.793 "read": true, 00:21:37.793 "write": true, 00:21:37.793 "unmap": true, 00:21:37.793 "flush": true, 00:21:37.793 "reset": true, 00:21:37.793 "nvme_admin": false, 00:21:37.793 "nvme_io": false, 00:21:37.793 "nvme_io_md": false, 00:21:37.793 "write_zeroes": true, 00:21:37.793 "zcopy": true, 00:21:37.793 "get_zone_info": false, 00:21:37.793 "zone_management": false, 00:21:37.793 "zone_append": false, 00:21:37.793 "compare": false, 00:21:37.793 "compare_and_write": false, 00:21:37.793 "abort": true, 00:21:37.793 "seek_hole": false, 00:21:37.793 "seek_data": false, 00:21:37.793 "copy": true, 00:21:37.793 "nvme_iov_md": false 00:21:37.793 }, 00:21:37.793 "memory_domains": [ 00:21:37.793 { 00:21:37.793 "dma_device_id": "system", 00:21:37.793 "dma_device_type": 1 00:21:37.793 }, 00:21:37.793 { 00:21:37.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:37.793 "dma_device_type": 2 00:21:37.793 } 00:21:37.793 ], 00:21:37.793 "driver_specific": {} 00:21:37.793 }' 00:21:37.793 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:37.793 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:37.793 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:37.793 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:37.793 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:37.793 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:37.793 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:37.793 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:37.793 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:37.793 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:37.793 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:37.793 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:37.793 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:38.052 [2024-07-15 09:48:06.055483] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev1 00:21:38.052 [2024-07-15 09:48:06.055510] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:38.052 [2024-07-15 09:48:06.055528] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:38.052 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:21:38.052 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:21:38.052 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:38.052 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:21:38.052 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:21:38.052 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:21:38.052 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:38.052 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:21:38.052 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:38.052 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:38.052 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:38.052 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:38.052 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:38.052 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:38.052 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:38.052 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:38.052 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:38.311 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:38.311 "name": "Existed_Raid", 00:21:38.311 "uuid": "547ccb96-428f-11ef-a0af-c98d8ee52a94", 00:21:38.311 "strip_size_kb": 64, 00:21:38.311 "state": "offline", 00:21:38.311 "raid_level": "concat", 00:21:38.311 "superblock": false, 00:21:38.311 "num_base_bdevs": 3, 00:21:38.311 "num_base_bdevs_discovered": 2, 00:21:38.311 "num_base_bdevs_operational": 2, 00:21:38.311 "base_bdevs_list": [ 00:21:38.311 { 00:21:38.311 "name": null, 00:21:38.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.311 "is_configured": false, 00:21:38.311 "data_offset": 0, 00:21:38.311 "data_size": 65536 00:21:38.311 }, 00:21:38.311 { 00:21:38.311 "name": "BaseBdev2", 00:21:38.311 "uuid": "53939723-428f-11ef-a0af-c98d8ee52a94", 00:21:38.311 "is_configured": true, 00:21:38.311 "data_offset": 0, 00:21:38.311 "data_size": 65536 00:21:38.311 }, 00:21:38.311 { 00:21:38.311 "name": "BaseBdev3", 00:21:38.311 "uuid": "547cc476-428f-11ef-a0af-c98d8ee52a94", 00:21:38.311 "is_configured": true, 00:21:38.311 "data_offset": 0, 00:21:38.311 "data_size": 65536 00:21:38.311 } 00:21:38.311 ] 00:21:38.311 }' 00:21:38.311 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
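Deleting BaseBdev1 out from under the online array drives it to offline rather than degraded, because has_redundancy returns non-zero for the concat level. A condensed sketch of the check the test performs next, under the same socket-path assumption (the field comparisons are simplified from the verify_raid_bdev_state helper):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$rpc bdev_malloc_delete BaseBdev1
# fetch the raid bdev's record and assert on the fields the test checks
tmp=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
[ "$(jq -r .state <<< "$tmp")" = offline ] &&
  [ "$(jq -r .num_base_bdevs_discovered <<< "$tmp")" -eq 2 ] &&
  echo "Existed_Raid went offline with 2 of 3 base bdevs, as expected"

The JSON dump that follows confirms both conditions: state "offline", num_base_bdevs_discovered 2, and a null name in the slot BaseBdev1 used to occupy.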
00:21:38.311 09:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.569 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:21:38.569 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:38.569 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:38.569 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:38.838 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:38.838 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:38.838 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:39.097 [2024-07-15 09:48:06.964036] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:39.097 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:39.097 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:39.097 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:39.097 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:39.097 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:39.097 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:39.097 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:39.356 [2024-07-15 09:48:07.396457] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:39.356 [2024-07-15 09:48:07.396488] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3b5f00034a00 name Existed_Raid, state offline 00:21:39.356 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:39.356 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:39.356 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:21:39.356 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:39.614 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:21:39.614 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:21:39.614 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:21:39.614 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:21:39.614 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:39.614 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:39.873 
BaseBdev2 00:21:39.873 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:21:39.873 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:21:39.873 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:39.873 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:39.873 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:39.873 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:39.873 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:40.132 09:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:40.391 [ 00:21:40.391 { 00:21:40.391 "name": "BaseBdev2", 00:21:40.391 "aliases": [ 00:21:40.391 "56f1ae8b-428f-11ef-a0af-c98d8ee52a94" 00:21:40.391 ], 00:21:40.391 "product_name": "Malloc disk", 00:21:40.391 "block_size": 512, 00:21:40.391 "num_blocks": 65536, 00:21:40.391 "uuid": "56f1ae8b-428f-11ef-a0af-c98d8ee52a94", 00:21:40.391 "assigned_rate_limits": { 00:21:40.391 "rw_ios_per_sec": 0, 00:21:40.391 "rw_mbytes_per_sec": 0, 00:21:40.391 "r_mbytes_per_sec": 0, 00:21:40.391 "w_mbytes_per_sec": 0 00:21:40.391 }, 00:21:40.391 "claimed": false, 00:21:40.391 "zoned": false, 00:21:40.391 "supported_io_types": { 00:21:40.391 "read": true, 00:21:40.391 "write": true, 00:21:40.391 "unmap": true, 00:21:40.391 "flush": true, 00:21:40.391 "reset": true, 00:21:40.391 "nvme_admin": false, 00:21:40.391 "nvme_io": false, 00:21:40.391 "nvme_io_md": false, 00:21:40.391 "write_zeroes": true, 00:21:40.391 "zcopy": true, 00:21:40.391 "get_zone_info": false, 00:21:40.391 "zone_management": false, 00:21:40.391 "zone_append": false, 00:21:40.391 "compare": false, 00:21:40.391 "compare_and_write": false, 00:21:40.391 "abort": true, 00:21:40.391 "seek_hole": false, 00:21:40.391 "seek_data": false, 00:21:40.391 "copy": true, 00:21:40.391 "nvme_iov_md": false 00:21:40.391 }, 00:21:40.391 "memory_domains": [ 00:21:40.391 { 00:21:40.391 "dma_device_id": "system", 00:21:40.391 "dma_device_type": 1 00:21:40.391 }, 00:21:40.391 { 00:21:40.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:40.391 "dma_device_type": 2 00:21:40.391 } 00:21:40.391 ], 00:21:40.391 "driver_specific": {} 00:21:40.391 } 00:21:40.391 ] 00:21:40.391 09:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:40.391 09:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:40.391 09:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:40.391 09:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:40.391 BaseBdev3 00:21:40.391 09:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:21:40.391 09:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:21:40.391 09:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local 
bdev_timeout= 00:21:40.391 09:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:40.391 09:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:40.391 09:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:40.391 09:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:40.649 09:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:40.908 [ 00:21:40.908 { 00:21:40.908 "name": "BaseBdev3", 00:21:40.908 "aliases": [ 00:21:40.908 "575357da-428f-11ef-a0af-c98d8ee52a94" 00:21:40.908 ], 00:21:40.908 "product_name": "Malloc disk", 00:21:40.908 "block_size": 512, 00:21:40.908 "num_blocks": 65536, 00:21:40.908 "uuid": "575357da-428f-11ef-a0af-c98d8ee52a94", 00:21:40.908 "assigned_rate_limits": { 00:21:40.908 "rw_ios_per_sec": 0, 00:21:40.908 "rw_mbytes_per_sec": 0, 00:21:40.909 "r_mbytes_per_sec": 0, 00:21:40.909 "w_mbytes_per_sec": 0 00:21:40.909 }, 00:21:40.909 "claimed": false, 00:21:40.909 "zoned": false, 00:21:40.909 "supported_io_types": { 00:21:40.909 "read": true, 00:21:40.909 "write": true, 00:21:40.909 "unmap": true, 00:21:40.909 "flush": true, 00:21:40.909 "reset": true, 00:21:40.909 "nvme_admin": false, 00:21:40.909 "nvme_io": false, 00:21:40.909 "nvme_io_md": false, 00:21:40.909 "write_zeroes": true, 00:21:40.909 "zcopy": true, 00:21:40.909 "get_zone_info": false, 00:21:40.909 "zone_management": false, 00:21:40.909 "zone_append": false, 00:21:40.909 "compare": false, 00:21:40.909 "compare_and_write": false, 00:21:40.909 "abort": true, 00:21:40.909 "seek_hole": false, 00:21:40.909 "seek_data": false, 00:21:40.909 "copy": true, 00:21:40.909 "nvme_iov_md": false 00:21:40.909 }, 00:21:40.909 "memory_domains": [ 00:21:40.909 { 00:21:40.909 "dma_device_id": "system", 00:21:40.909 "dma_device_type": 1 00:21:40.909 }, 00:21:40.909 { 00:21:40.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:40.909 "dma_device_type": 2 00:21:40.909 } 00:21:40.909 ], 00:21:40.909 "driver_specific": {} 00:21:40.909 } 00:21:40.909 ] 00:21:40.909 09:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:40.909 09:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:40.909 09:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:40.909 09:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:41.168 [2024-07-15 09:48:09.089136] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:41.168 [2024-07-15 09:48:09.089200] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:41.168 [2024-07-15 09:48:09.089207] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:41.168 [2024-07-15 09:48:09.089797] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:41.168 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 
3 00:21:41.168 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:41.168 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:41.168 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:41.168 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:41.168 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:41.168 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:41.168 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:41.168 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:41.168 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:41.168 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:41.168 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:41.428 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:41.428 "name": "Existed_Raid", 00:21:41.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.428 "strip_size_kb": 64, 00:21:41.428 "state": "configuring", 00:21:41.428 "raid_level": "concat", 00:21:41.428 "superblock": false, 00:21:41.428 "num_base_bdevs": 3, 00:21:41.428 "num_base_bdevs_discovered": 2, 00:21:41.428 "num_base_bdevs_operational": 3, 00:21:41.428 "base_bdevs_list": [ 00:21:41.429 { 00:21:41.429 "name": "BaseBdev1", 00:21:41.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.429 "is_configured": false, 00:21:41.429 "data_offset": 0, 00:21:41.429 "data_size": 0 00:21:41.429 }, 00:21:41.429 { 00:21:41.429 "name": "BaseBdev2", 00:21:41.429 "uuid": "56f1ae8b-428f-11ef-a0af-c98d8ee52a94", 00:21:41.429 "is_configured": true, 00:21:41.429 "data_offset": 0, 00:21:41.429 "data_size": 65536 00:21:41.429 }, 00:21:41.429 { 00:21:41.429 "name": "BaseBdev3", 00:21:41.429 "uuid": "575357da-428f-11ef-a0af-c98d8ee52a94", 00:21:41.429 "is_configured": true, 00:21:41.429 "data_offset": 0, 00:21:41.429 "data_size": 65536 00:21:41.429 } 00:21:41.429 ] 00:21:41.429 }' 00:21:41.429 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:41.429 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.686 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:41.945 [2024-07-15 09:48:09.813164] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:41.946 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:41.946 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:41.946 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:41.946 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:41.946 
09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:41.946 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:41.946 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:41.946 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:41.946 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:41.946 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:41.946 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:41.946 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:41.946 09:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:41.946 "name": "Existed_Raid", 00:21:41.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.946 "strip_size_kb": 64, 00:21:41.946 "state": "configuring", 00:21:41.946 "raid_level": "concat", 00:21:41.946 "superblock": false, 00:21:41.946 "num_base_bdevs": 3, 00:21:41.946 "num_base_bdevs_discovered": 1, 00:21:41.946 "num_base_bdevs_operational": 3, 00:21:41.946 "base_bdevs_list": [ 00:21:41.946 { 00:21:41.946 "name": "BaseBdev1", 00:21:41.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.946 "is_configured": false, 00:21:41.946 "data_offset": 0, 00:21:41.946 "data_size": 0 00:21:41.946 }, 00:21:41.946 { 00:21:41.946 "name": null, 00:21:41.946 "uuid": "56f1ae8b-428f-11ef-a0af-c98d8ee52a94", 00:21:41.946 "is_configured": false, 00:21:41.946 "data_offset": 0, 00:21:41.946 "data_size": 65536 00:21:41.946 }, 00:21:41.946 { 00:21:41.946 "name": "BaseBdev3", 00:21:41.946 "uuid": "575357da-428f-11ef-a0af-c98d8ee52a94", 00:21:41.946 "is_configured": true, 00:21:41.946 "data_offset": 0, 00:21:41.946 "data_size": 65536 00:21:41.946 } 00:21:41.946 ] 00:21:41.946 }' 00:21:41.946 09:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:41.946 09:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.514 09:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:42.514 09:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:42.514 09:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:21:42.514 09:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:42.772 [2024-07-15 09:48:10.781360] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:42.772 BaseBdev1 00:21:42.772 09:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:21:42.772 09:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:21:42.772 09:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:42.772 09:48:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@899 -- # local i 00:21:42.772 09:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:42.772 09:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:42.772 09:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:43.031 09:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:43.289 [ 00:21:43.289 { 00:21:43.289 "name": "BaseBdev1", 00:21:43.289 "aliases": [ 00:21:43.289 "58b55e64-428f-11ef-a0af-c98d8ee52a94" 00:21:43.289 ], 00:21:43.289 "product_name": "Malloc disk", 00:21:43.289 "block_size": 512, 00:21:43.289 "num_blocks": 65536, 00:21:43.289 "uuid": "58b55e64-428f-11ef-a0af-c98d8ee52a94", 00:21:43.289 "assigned_rate_limits": { 00:21:43.289 "rw_ios_per_sec": 0, 00:21:43.289 "rw_mbytes_per_sec": 0, 00:21:43.289 "r_mbytes_per_sec": 0, 00:21:43.289 "w_mbytes_per_sec": 0 00:21:43.289 }, 00:21:43.289 "claimed": true, 00:21:43.289 "claim_type": "exclusive_write", 00:21:43.289 "zoned": false, 00:21:43.289 "supported_io_types": { 00:21:43.289 "read": true, 00:21:43.289 "write": true, 00:21:43.289 "unmap": true, 00:21:43.289 "flush": true, 00:21:43.289 "reset": true, 00:21:43.289 "nvme_admin": false, 00:21:43.289 "nvme_io": false, 00:21:43.289 "nvme_io_md": false, 00:21:43.289 "write_zeroes": true, 00:21:43.289 "zcopy": true, 00:21:43.289 "get_zone_info": false, 00:21:43.289 "zone_management": false, 00:21:43.289 "zone_append": false, 00:21:43.289 "compare": false, 00:21:43.289 "compare_and_write": false, 00:21:43.289 "abort": true, 00:21:43.289 "seek_hole": false, 00:21:43.289 "seek_data": false, 00:21:43.289 "copy": true, 00:21:43.289 "nvme_iov_md": false 00:21:43.289 }, 00:21:43.289 "memory_domains": [ 00:21:43.289 { 00:21:43.289 "dma_device_id": "system", 00:21:43.289 "dma_device_type": 1 00:21:43.289 }, 00:21:43.289 { 00:21:43.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:43.289 "dma_device_type": 2 00:21:43.289 } 00:21:43.289 ], 00:21:43.289 "driver_specific": {} 00:21:43.289 } 00:21:43.289 ] 00:21:43.289 09:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:43.289 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:43.289 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:43.289 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:43.289 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:43.289 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:43.289 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:43.289 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:43.289 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:43.289 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:43.289 09:48:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:21:43.289 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:43.289 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:43.546 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:43.546 "name": "Existed_Raid", 00:21:43.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.546 "strip_size_kb": 64, 00:21:43.546 "state": "configuring", 00:21:43.546 "raid_level": "concat", 00:21:43.546 "superblock": false, 00:21:43.546 "num_base_bdevs": 3, 00:21:43.546 "num_base_bdevs_discovered": 2, 00:21:43.546 "num_base_bdevs_operational": 3, 00:21:43.546 "base_bdevs_list": [ 00:21:43.546 { 00:21:43.546 "name": "BaseBdev1", 00:21:43.546 "uuid": "58b55e64-428f-11ef-a0af-c98d8ee52a94", 00:21:43.546 "is_configured": true, 00:21:43.546 "data_offset": 0, 00:21:43.546 "data_size": 65536 00:21:43.547 }, 00:21:43.547 { 00:21:43.547 "name": null, 00:21:43.547 "uuid": "56f1ae8b-428f-11ef-a0af-c98d8ee52a94", 00:21:43.547 "is_configured": false, 00:21:43.547 "data_offset": 0, 00:21:43.547 "data_size": 65536 00:21:43.547 }, 00:21:43.547 { 00:21:43.547 "name": "BaseBdev3", 00:21:43.547 "uuid": "575357da-428f-11ef-a0af-c98d8ee52a94", 00:21:43.547 "is_configured": true, 00:21:43.547 "data_offset": 0, 00:21:43.547 "data_size": 65536 00:21:43.547 } 00:21:43.547 ] 00:21:43.547 }' 00:21:43.547 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:43.547 09:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.805 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:43.805 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:43.805 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:21:43.805 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:21:44.064 [2024-07-15 09:48:12.061304] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:44.064 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:44.064 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:44.064 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:44.064 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:44.064 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:44.064 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:44.064 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:44.064 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:44.064 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
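(Annotation, not part of the captured trace.) The verify_raid_bdev_state helper being traced here reduces to a single RPC plus a jq filter. A minimal sketch of the same check, reusing only the socket and script paths that appear in the trace; the helper name assert_raid_state and its two positional arguments are illustrative, not SPDK names:

    assert_raid_state() {
        # Query all raid bdevs over the test socket, keep only Existed_Raid.
        local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
        local info
        info=$("$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
               jq -r '.[] | select(.name == "Existed_Raid")')
        # $1 = expected state, $2 = expected num_base_bdevs_discovered.
        [[ $(jq -r '.state' <<<"$info") == "$1" ]] &&
            [[ $(jq -r '.num_base_bdevs_discovered' <<<"$info") -eq "$2" ]]
    }
    # e.g. after the bdev_raid_remove_base_bdev BaseBdev3 call above:
    #   assert_raid_state configuring 1
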
00:21:44.064 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:44.064 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:44.064 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:44.322 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:44.322 "name": "Existed_Raid", 00:21:44.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.322 "strip_size_kb": 64, 00:21:44.322 "state": "configuring", 00:21:44.322 "raid_level": "concat", 00:21:44.322 "superblock": false, 00:21:44.322 "num_base_bdevs": 3, 00:21:44.322 "num_base_bdevs_discovered": 1, 00:21:44.322 "num_base_bdevs_operational": 3, 00:21:44.322 "base_bdevs_list": [ 00:21:44.322 { 00:21:44.322 "name": "BaseBdev1", 00:21:44.322 "uuid": "58b55e64-428f-11ef-a0af-c98d8ee52a94", 00:21:44.322 "is_configured": true, 00:21:44.322 "data_offset": 0, 00:21:44.322 "data_size": 65536 00:21:44.322 }, 00:21:44.322 { 00:21:44.322 "name": null, 00:21:44.322 "uuid": "56f1ae8b-428f-11ef-a0af-c98d8ee52a94", 00:21:44.322 "is_configured": false, 00:21:44.322 "data_offset": 0, 00:21:44.322 "data_size": 65536 00:21:44.322 }, 00:21:44.322 { 00:21:44.322 "name": null, 00:21:44.322 "uuid": "575357da-428f-11ef-a0af-c98d8ee52a94", 00:21:44.322 "is_configured": false, 00:21:44.322 "data_offset": 0, 00:21:44.322 "data_size": 65536 00:21:44.322 } 00:21:44.322 ] 00:21:44.322 }' 00:21:44.323 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:44.323 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.582 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:44.582 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:44.854 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:21:44.854 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:44.854 [2024-07-15 09:48:12.953351] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:45.114 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:45.114 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:45.114 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:45.114 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:45.114 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:45.114 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:45.114 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:45.114 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:45.114 09:48:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:45.114 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:45.114 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.114 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:45.114 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:45.114 "name": "Existed_Raid", 00:21:45.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.114 "strip_size_kb": 64, 00:21:45.114 "state": "configuring", 00:21:45.114 "raid_level": "concat", 00:21:45.114 "superblock": false, 00:21:45.114 "num_base_bdevs": 3, 00:21:45.114 "num_base_bdevs_discovered": 2, 00:21:45.114 "num_base_bdevs_operational": 3, 00:21:45.114 "base_bdevs_list": [ 00:21:45.114 { 00:21:45.114 "name": "BaseBdev1", 00:21:45.114 "uuid": "58b55e64-428f-11ef-a0af-c98d8ee52a94", 00:21:45.114 "is_configured": true, 00:21:45.114 "data_offset": 0, 00:21:45.114 "data_size": 65536 00:21:45.114 }, 00:21:45.114 { 00:21:45.114 "name": null, 00:21:45.114 "uuid": "56f1ae8b-428f-11ef-a0af-c98d8ee52a94", 00:21:45.114 "is_configured": false, 00:21:45.114 "data_offset": 0, 00:21:45.114 "data_size": 65536 00:21:45.114 }, 00:21:45.114 { 00:21:45.114 "name": "BaseBdev3", 00:21:45.114 "uuid": "575357da-428f-11ef-a0af-c98d8ee52a94", 00:21:45.114 "is_configured": true, 00:21:45.114 "data_offset": 0, 00:21:45.114 "data_size": 65536 00:21:45.114 } 00:21:45.114 ] 00:21:45.114 }' 00:21:45.114 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:45.114 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.373 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.373 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:45.632 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:21:45.632 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:45.891 [2024-07-15 09:48:13.857423] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:45.891 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:45.891 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:45.891 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:45.891 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:45.891 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:45.891 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:45.891 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:45.891 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:21:45.891 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:45.891 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:45.891 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.891 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:46.150 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:46.150 "name": "Existed_Raid", 00:21:46.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.150 "strip_size_kb": 64, 00:21:46.150 "state": "configuring", 00:21:46.150 "raid_level": "concat", 00:21:46.150 "superblock": false, 00:21:46.150 "num_base_bdevs": 3, 00:21:46.150 "num_base_bdevs_discovered": 1, 00:21:46.150 "num_base_bdevs_operational": 3, 00:21:46.150 "base_bdevs_list": [ 00:21:46.150 { 00:21:46.150 "name": null, 00:21:46.150 "uuid": "58b55e64-428f-11ef-a0af-c98d8ee52a94", 00:21:46.150 "is_configured": false, 00:21:46.150 "data_offset": 0, 00:21:46.150 "data_size": 65536 00:21:46.150 }, 00:21:46.150 { 00:21:46.150 "name": null, 00:21:46.150 "uuid": "56f1ae8b-428f-11ef-a0af-c98d8ee52a94", 00:21:46.150 "is_configured": false, 00:21:46.150 "data_offset": 0, 00:21:46.150 "data_size": 65536 00:21:46.150 }, 00:21:46.150 { 00:21:46.150 "name": "BaseBdev3", 00:21:46.150 "uuid": "575357da-428f-11ef-a0af-c98d8ee52a94", 00:21:46.150 "is_configured": true, 00:21:46.150 "data_offset": 0, 00:21:46.150 "data_size": 65536 00:21:46.150 } 00:21:46.150 ] 00:21:46.150 }' 00:21:46.150 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:46.150 09:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.409 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.409 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:46.668 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:21:46.668 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:46.668 [2024-07-15 09:48:14.745769] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:46.668 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:46.668 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:46.668 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:46.668 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:46.668 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:46.668 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:46.668 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
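(Annotation, not part of the captured trace.) Deleting the backing malloc disk with bdev_malloc_delete, rather than calling bdev_raid_remove_base_bdev, leaves the slot present but unconfigured: in the raid_bdev_info dumped just below, base_bdevs_list[0] keeps its uuid while "name" becomes null and "is_configured" drops to false. A sketch of the same probe, built only from RPCs and jq paths that appear in this trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Drop the first base bdev by destroying the underlying malloc disk ...
    "$rpc" -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
    # ... then confirm slot 0 lost its name and configured flag but kept its uuid.
    "$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
        jq '.[0].base_bdevs_list[0] | {name, uuid, is_configured}'
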
00:21:46.668 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:46.668 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:46.668 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:46.668 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:46.668 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.927 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:46.927 "name": "Existed_Raid", 00:21:46.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.927 "strip_size_kb": 64, 00:21:46.927 "state": "configuring", 00:21:46.927 "raid_level": "concat", 00:21:46.927 "superblock": false, 00:21:46.927 "num_base_bdevs": 3, 00:21:46.927 "num_base_bdevs_discovered": 2, 00:21:46.927 "num_base_bdevs_operational": 3, 00:21:46.927 "base_bdevs_list": [ 00:21:46.927 { 00:21:46.927 "name": null, 00:21:46.927 "uuid": "58b55e64-428f-11ef-a0af-c98d8ee52a94", 00:21:46.927 "is_configured": false, 00:21:46.927 "data_offset": 0, 00:21:46.927 "data_size": 65536 00:21:46.927 }, 00:21:46.927 { 00:21:46.927 "name": "BaseBdev2", 00:21:46.927 "uuid": "56f1ae8b-428f-11ef-a0af-c98d8ee52a94", 00:21:46.927 "is_configured": true, 00:21:46.927 "data_offset": 0, 00:21:46.927 "data_size": 65536 00:21:46.927 }, 00:21:46.927 { 00:21:46.927 "name": "BaseBdev3", 00:21:46.927 "uuid": "575357da-428f-11ef-a0af-c98d8ee52a94", 00:21:46.927 "is_configured": true, 00:21:46.927 "data_offset": 0, 00:21:46.927 "data_size": 65536 00:21:46.927 } 00:21:46.927 ] 00:21:46.927 }' 00:21:46.927 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:46.927 09:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.186 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.186 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:47.445 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:21:47.445 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.445 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:47.704 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 58b55e64-428f-11ef-a0af-c98d8ee52a94 00:21:47.967 [2024-07-15 09:48:15.889993] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:47.967 [2024-07-15 09:48:15.890019] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3b5f00034a00 00:21:47.967 [2024-07-15 09:48:15.890023] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:21:47.967 [2024-07-15 09:48:15.890044] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3b5f00097e20 00:21:47.967 [2024-07-15 
09:48:15.890112] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3b5f00034a00 00:21:47.967 [2024-07-15 09:48:15.890116] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x3b5f00034a00 00:21:47.967 [2024-07-15 09:48:15.890144] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:47.967 NewBaseBdev 00:21:47.967 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:21:47.967 09:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:21:47.967 09:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:47.967 09:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:47.967 09:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:47.967 09:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:47.967 09:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:48.254 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:48.254 [ 00:21:48.254 { 00:21:48.254 "name": "NewBaseBdev", 00:21:48.254 "aliases": [ 00:21:48.254 "58b55e64-428f-11ef-a0af-c98d8ee52a94" 00:21:48.254 ], 00:21:48.254 "product_name": "Malloc disk", 00:21:48.254 "block_size": 512, 00:21:48.254 "num_blocks": 65536, 00:21:48.254 "uuid": "58b55e64-428f-11ef-a0af-c98d8ee52a94", 00:21:48.254 "assigned_rate_limits": { 00:21:48.254 "rw_ios_per_sec": 0, 00:21:48.254 "rw_mbytes_per_sec": 0, 00:21:48.254 "r_mbytes_per_sec": 0, 00:21:48.254 "w_mbytes_per_sec": 0 00:21:48.254 }, 00:21:48.254 "claimed": true, 00:21:48.254 "claim_type": "exclusive_write", 00:21:48.254 "zoned": false, 00:21:48.254 "supported_io_types": { 00:21:48.254 "read": true, 00:21:48.254 "write": true, 00:21:48.254 "unmap": true, 00:21:48.254 "flush": true, 00:21:48.254 "reset": true, 00:21:48.254 "nvme_admin": false, 00:21:48.254 "nvme_io": false, 00:21:48.254 "nvme_io_md": false, 00:21:48.254 "write_zeroes": true, 00:21:48.254 "zcopy": true, 00:21:48.254 "get_zone_info": false, 00:21:48.254 "zone_management": false, 00:21:48.254 "zone_append": false, 00:21:48.254 "compare": false, 00:21:48.254 "compare_and_write": false, 00:21:48.254 "abort": true, 00:21:48.254 "seek_hole": false, 00:21:48.254 "seek_data": false, 00:21:48.254 "copy": true, 00:21:48.254 "nvme_iov_md": false 00:21:48.254 }, 00:21:48.254 "memory_domains": [ 00:21:48.254 { 00:21:48.254 "dma_device_id": "system", 00:21:48.254 "dma_device_type": 1 00:21:48.254 }, 00:21:48.254 { 00:21:48.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:48.254 "dma_device_type": 2 00:21:48.254 } 00:21:48.254 ], 00:21:48.254 "driver_specific": {} 00:21:48.254 } 00:21:48.254 ] 00:21:48.254 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:48.254 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:21:48.254 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:48.254 09:48:16 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:48.254 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:48.254 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:48.254 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:48.254 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:48.254 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:48.254 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:48.254 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:48.254 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.254 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:48.514 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:48.514 "name": "Existed_Raid", 00:21:48.514 "uuid": "5bc0e841-428f-11ef-a0af-c98d8ee52a94", 00:21:48.514 "strip_size_kb": 64, 00:21:48.514 "state": "online", 00:21:48.514 "raid_level": "concat", 00:21:48.514 "superblock": false, 00:21:48.514 "num_base_bdevs": 3, 00:21:48.514 "num_base_bdevs_discovered": 3, 00:21:48.514 "num_base_bdevs_operational": 3, 00:21:48.514 "base_bdevs_list": [ 00:21:48.514 { 00:21:48.514 "name": "NewBaseBdev", 00:21:48.514 "uuid": "58b55e64-428f-11ef-a0af-c98d8ee52a94", 00:21:48.514 "is_configured": true, 00:21:48.514 "data_offset": 0, 00:21:48.514 "data_size": 65536 00:21:48.514 }, 00:21:48.514 { 00:21:48.514 "name": "BaseBdev2", 00:21:48.514 "uuid": "56f1ae8b-428f-11ef-a0af-c98d8ee52a94", 00:21:48.514 "is_configured": true, 00:21:48.514 "data_offset": 0, 00:21:48.514 "data_size": 65536 00:21:48.514 }, 00:21:48.514 { 00:21:48.514 "name": "BaseBdev3", 00:21:48.514 "uuid": "575357da-428f-11ef-a0af-c98d8ee52a94", 00:21:48.514 "is_configured": true, 00:21:48.514 "data_offset": 0, 00:21:48.514 "data_size": 65536 00:21:48.514 } 00:21:48.514 ] 00:21:48.514 }' 00:21:48.514 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:48.514 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.773 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:21:48.773 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:48.773 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:48.773 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:48.773 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:48.773 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:48.773 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:48.773 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:49.033 [2024-07-15 09:48:16.981951] 
bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:49.033 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:49.033 "name": "Existed_Raid", 00:21:49.033 "aliases": [ 00:21:49.033 "5bc0e841-428f-11ef-a0af-c98d8ee52a94" 00:21:49.033 ], 00:21:49.033 "product_name": "Raid Volume", 00:21:49.033 "block_size": 512, 00:21:49.033 "num_blocks": 196608, 00:21:49.033 "uuid": "5bc0e841-428f-11ef-a0af-c98d8ee52a94", 00:21:49.033 "assigned_rate_limits": { 00:21:49.033 "rw_ios_per_sec": 0, 00:21:49.033 "rw_mbytes_per_sec": 0, 00:21:49.033 "r_mbytes_per_sec": 0, 00:21:49.033 "w_mbytes_per_sec": 0 00:21:49.033 }, 00:21:49.033 "claimed": false, 00:21:49.033 "zoned": false, 00:21:49.033 "supported_io_types": { 00:21:49.033 "read": true, 00:21:49.033 "write": true, 00:21:49.033 "unmap": true, 00:21:49.033 "flush": true, 00:21:49.033 "reset": true, 00:21:49.033 "nvme_admin": false, 00:21:49.033 "nvme_io": false, 00:21:49.033 "nvme_io_md": false, 00:21:49.033 "write_zeroes": true, 00:21:49.033 "zcopy": false, 00:21:49.033 "get_zone_info": false, 00:21:49.033 "zone_management": false, 00:21:49.033 "zone_append": false, 00:21:49.033 "compare": false, 00:21:49.033 "compare_and_write": false, 00:21:49.033 "abort": false, 00:21:49.033 "seek_hole": false, 00:21:49.033 "seek_data": false, 00:21:49.033 "copy": false, 00:21:49.033 "nvme_iov_md": false 00:21:49.033 }, 00:21:49.033 "memory_domains": [ 00:21:49.033 { 00:21:49.033 "dma_device_id": "system", 00:21:49.033 "dma_device_type": 1 00:21:49.033 }, 00:21:49.033 { 00:21:49.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:49.033 "dma_device_type": 2 00:21:49.033 }, 00:21:49.033 { 00:21:49.033 "dma_device_id": "system", 00:21:49.033 "dma_device_type": 1 00:21:49.033 }, 00:21:49.033 { 00:21:49.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:49.033 "dma_device_type": 2 00:21:49.033 }, 00:21:49.033 { 00:21:49.033 "dma_device_id": "system", 00:21:49.033 "dma_device_type": 1 00:21:49.033 }, 00:21:49.033 { 00:21:49.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:49.033 "dma_device_type": 2 00:21:49.033 } 00:21:49.033 ], 00:21:49.033 "driver_specific": { 00:21:49.033 "raid": { 00:21:49.033 "uuid": "5bc0e841-428f-11ef-a0af-c98d8ee52a94", 00:21:49.033 "strip_size_kb": 64, 00:21:49.033 "state": "online", 00:21:49.033 "raid_level": "concat", 00:21:49.033 "superblock": false, 00:21:49.033 "num_base_bdevs": 3, 00:21:49.033 "num_base_bdevs_discovered": 3, 00:21:49.033 "num_base_bdevs_operational": 3, 00:21:49.033 "base_bdevs_list": [ 00:21:49.033 { 00:21:49.033 "name": "NewBaseBdev", 00:21:49.033 "uuid": "58b55e64-428f-11ef-a0af-c98d8ee52a94", 00:21:49.033 "is_configured": true, 00:21:49.033 "data_offset": 0, 00:21:49.033 "data_size": 65536 00:21:49.033 }, 00:21:49.033 { 00:21:49.033 "name": "BaseBdev2", 00:21:49.033 "uuid": "56f1ae8b-428f-11ef-a0af-c98d8ee52a94", 00:21:49.033 "is_configured": true, 00:21:49.033 "data_offset": 0, 00:21:49.033 "data_size": 65536 00:21:49.033 }, 00:21:49.033 { 00:21:49.033 "name": "BaseBdev3", 00:21:49.033 "uuid": "575357da-428f-11ef-a0af-c98d8ee52a94", 00:21:49.033 "is_configured": true, 00:21:49.033 "data_offset": 0, 00:21:49.033 "data_size": 65536 00:21:49.033 } 00:21:49.033 ] 00:21:49.033 } 00:21:49.033 } 00:21:49.033 }' 00:21:49.033 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:49.034 09:48:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:21:49.034 BaseBdev2 00:21:49.034 BaseBdev3' 00:21:49.034 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:49.034 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:21:49.034 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:49.293 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:49.293 "name": "NewBaseBdev", 00:21:49.293 "aliases": [ 00:21:49.293 "58b55e64-428f-11ef-a0af-c98d8ee52a94" 00:21:49.293 ], 00:21:49.293 "product_name": "Malloc disk", 00:21:49.293 "block_size": 512, 00:21:49.293 "num_blocks": 65536, 00:21:49.293 "uuid": "58b55e64-428f-11ef-a0af-c98d8ee52a94", 00:21:49.293 "assigned_rate_limits": { 00:21:49.293 "rw_ios_per_sec": 0, 00:21:49.293 "rw_mbytes_per_sec": 0, 00:21:49.293 "r_mbytes_per_sec": 0, 00:21:49.293 "w_mbytes_per_sec": 0 00:21:49.293 }, 00:21:49.293 "claimed": true, 00:21:49.293 "claim_type": "exclusive_write", 00:21:49.293 "zoned": false, 00:21:49.293 "supported_io_types": { 00:21:49.293 "read": true, 00:21:49.293 "write": true, 00:21:49.293 "unmap": true, 00:21:49.293 "flush": true, 00:21:49.293 "reset": true, 00:21:49.293 "nvme_admin": false, 00:21:49.293 "nvme_io": false, 00:21:49.293 "nvme_io_md": false, 00:21:49.293 "write_zeroes": true, 00:21:49.293 "zcopy": true, 00:21:49.293 "get_zone_info": false, 00:21:49.293 "zone_management": false, 00:21:49.293 "zone_append": false, 00:21:49.293 "compare": false, 00:21:49.293 "compare_and_write": false, 00:21:49.293 "abort": true, 00:21:49.293 "seek_hole": false, 00:21:49.293 "seek_data": false, 00:21:49.293 "copy": true, 00:21:49.293 "nvme_iov_md": false 00:21:49.293 }, 00:21:49.293 "memory_domains": [ 00:21:49.293 { 00:21:49.293 "dma_device_id": "system", 00:21:49.293 "dma_device_type": 1 00:21:49.293 }, 00:21:49.293 { 00:21:49.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:49.293 "dma_device_type": 2 00:21:49.293 } 00:21:49.293 ], 00:21:49.293 "driver_specific": {} 00:21:49.293 }' 00:21:49.293 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:49.293 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:49.293 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:49.293 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:49.293 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:49.293 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:49.293 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:49.293 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:49.293 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:49.293 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:49.293 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:49.293 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:49.293 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for 
name in $base_bdev_names 00:21:49.293 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:49.293 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:49.552 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:49.552 "name": "BaseBdev2", 00:21:49.552 "aliases": [ 00:21:49.552 "56f1ae8b-428f-11ef-a0af-c98d8ee52a94" 00:21:49.552 ], 00:21:49.552 "product_name": "Malloc disk", 00:21:49.552 "block_size": 512, 00:21:49.552 "num_blocks": 65536, 00:21:49.552 "uuid": "56f1ae8b-428f-11ef-a0af-c98d8ee52a94", 00:21:49.552 "assigned_rate_limits": { 00:21:49.552 "rw_ios_per_sec": 0, 00:21:49.553 "rw_mbytes_per_sec": 0, 00:21:49.553 "r_mbytes_per_sec": 0, 00:21:49.553 "w_mbytes_per_sec": 0 00:21:49.553 }, 00:21:49.553 "claimed": true, 00:21:49.553 "claim_type": "exclusive_write", 00:21:49.553 "zoned": false, 00:21:49.553 "supported_io_types": { 00:21:49.553 "read": true, 00:21:49.553 "write": true, 00:21:49.553 "unmap": true, 00:21:49.553 "flush": true, 00:21:49.553 "reset": true, 00:21:49.553 "nvme_admin": false, 00:21:49.553 "nvme_io": false, 00:21:49.553 "nvme_io_md": false, 00:21:49.553 "write_zeroes": true, 00:21:49.553 "zcopy": true, 00:21:49.553 "get_zone_info": false, 00:21:49.553 "zone_management": false, 00:21:49.553 "zone_append": false, 00:21:49.553 "compare": false, 00:21:49.553 "compare_and_write": false, 00:21:49.553 "abort": true, 00:21:49.553 "seek_hole": false, 00:21:49.553 "seek_data": false, 00:21:49.553 "copy": true, 00:21:49.553 "nvme_iov_md": false 00:21:49.553 }, 00:21:49.553 "memory_domains": [ 00:21:49.553 { 00:21:49.553 "dma_device_id": "system", 00:21:49.553 "dma_device_type": 1 00:21:49.553 }, 00:21:49.553 { 00:21:49.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:49.553 "dma_device_type": 2 00:21:49.553 } 00:21:49.553 ], 00:21:49.553 "driver_specific": {} 00:21:49.553 }' 00:21:49.553 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:49.553 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:49.553 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:49.553 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:49.553 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:49.553 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:49.553 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:49.553 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:49.553 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:49.553 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:49.553 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:49.553 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:49.553 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:49.553 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
BaseBdev3 00:21:49.553 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:49.812 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:49.812 "name": "BaseBdev3", 00:21:49.812 "aliases": [ 00:21:49.812 "575357da-428f-11ef-a0af-c98d8ee52a94" 00:21:49.812 ], 00:21:49.812 "product_name": "Malloc disk", 00:21:49.812 "block_size": 512, 00:21:49.812 "num_blocks": 65536, 00:21:49.812 "uuid": "575357da-428f-11ef-a0af-c98d8ee52a94", 00:21:49.812 "assigned_rate_limits": { 00:21:49.812 "rw_ios_per_sec": 0, 00:21:49.812 "rw_mbytes_per_sec": 0, 00:21:49.812 "r_mbytes_per_sec": 0, 00:21:49.812 "w_mbytes_per_sec": 0 00:21:49.812 }, 00:21:49.812 "claimed": true, 00:21:49.812 "claim_type": "exclusive_write", 00:21:49.812 "zoned": false, 00:21:49.813 "supported_io_types": { 00:21:49.813 "read": true, 00:21:49.813 "write": true, 00:21:49.813 "unmap": true, 00:21:49.813 "flush": true, 00:21:49.813 "reset": true, 00:21:49.813 "nvme_admin": false, 00:21:49.813 "nvme_io": false, 00:21:49.813 "nvme_io_md": false, 00:21:49.813 "write_zeroes": true, 00:21:49.813 "zcopy": true, 00:21:49.813 "get_zone_info": false, 00:21:49.813 "zone_management": false, 00:21:49.813 "zone_append": false, 00:21:49.813 "compare": false, 00:21:49.813 "compare_and_write": false, 00:21:49.813 "abort": true, 00:21:49.813 "seek_hole": false, 00:21:49.813 "seek_data": false, 00:21:49.813 "copy": true, 00:21:49.813 "nvme_iov_md": false 00:21:49.813 }, 00:21:49.813 "memory_domains": [ 00:21:49.813 { 00:21:49.813 "dma_device_id": "system", 00:21:49.813 "dma_device_type": 1 00:21:49.813 }, 00:21:49.813 { 00:21:49.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:49.813 "dma_device_type": 2 00:21:49.813 } 00:21:49.813 ], 00:21:49.813 "driver_specific": {} 00:21:49.813 }' 00:21:49.813 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:49.813 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:49.813 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:49.813 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:49.813 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:49.813 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:49.813 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:49.813 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:49.813 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:49.813 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:49.813 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:49.813 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:49.813 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:50.072 [2024-07-15 09:48:18.094008] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:50.072 [2024-07-15 09:48:18.094036] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:50.072 [2024-07-15 09:48:18.094053] 
bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:50.072 [2024-07-15 09:48:18.094066] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:50.072 [2024-07-15 09:48:18.094069] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3b5f00034a00 name Existed_Raid, state offline 00:21:50.072 09:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 53956 00:21:50.072 09:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 53956 ']' 00:21:50.072 09:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 53956 00:21:50.072 09:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:21:50.072 09:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:21:50.072 09:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 53956 00:21:50.072 09:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:21:50.072 09:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:21:50.072 09:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:21:50.072 killing process with pid 53956 00:21:50.072 09:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 53956' 00:21:50.072 09:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 53956 00:21:50.072 [2024-07-15 09:48:18.126536] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:50.072 09:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 53956 00:21:50.072 [2024-07-15 09:48:18.152950] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:50.331 09:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:21:50.331 00:21:50.331 real 0m20.678s 00:21:50.331 user 0m36.795s 00:21:50.331 sys 0m3.785s 00:21:50.331 09:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:50.331 09:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.331 ************************************ 00:21:50.331 END TEST raid_state_function_test 00:21:50.332 ************************************ 00:21:50.591 09:48:18 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:50.591 09:48:18 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:21:50.591 09:48:18 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:21:50.591 09:48:18 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:50.591 09:48:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:50.591 ************************************ 00:21:50.591 START TEST raid_state_function_test_sb 00:21:50.591 ************************************ 00:21:50.591 09:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 3 true 00:21:50.591 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:21:50.591 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:21:50.591 09:48:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:21:50.591 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:21:50.591 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:21:50.591 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:50.591 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:21:50.592 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:50.592 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:50.592 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:21:50.592 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:50.592 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:50.592 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:21:50.592 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:50.592 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:50.592 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:50.592 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:21:50.592 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:21:50.592 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:21:50.592 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:21:50.592 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:21:50.592 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:21:50.592 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:21:50.592 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:21:50.592 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:21:50.592 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:21:50.592 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=54673 00:21:50.592 Process raid pid: 54673 00:21:50.592 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 54673' 00:21:50.592 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 54673 /var/tmp/spdk-raid.sock 00:21:50.592 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:50.592 09:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 54673 ']' 00:21:50.592 09:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:50.592 09:48:18 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@834 -- # local max_retries=100 00:21:50.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:50.592 09:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:50.592 09:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:50.592 09:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:50.592 [2024-07-15 09:48:18.483699] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:21:50.592 [2024-07-15 09:48:18.484012] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:21:51.218 EAL: TSC is not safe to use in SMP mode 00:21:51.218 EAL: TSC is not invariant 00:21:51.218 [2024-07-15 09:48:19.195068] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.218 [2024-07-15 09:48:19.300901] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:21:51.218 [2024-07-15 09:48:19.303490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.218 [2024-07-15 09:48:19.304206] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:51.218 [2024-07-15 09:48:19.304216] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:51.478 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:51.478 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:21:51.478 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:51.737 [2024-07-15 09:48:19.691421] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:51.737 [2024-07-15 09:48:19.691480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:51.737 [2024-07-15 09:48:19.691484] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:51.737 [2024-07-15 09:48:19.691491] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:51.737 [2024-07-15 09:48:19.691494] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:51.737 [2024-07-15 09:48:19.691500] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:51.737 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:51.737 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:51.737 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:51.737 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:51.737 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:51.737 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:51.737 
09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:51.737 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:51.737 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:51.737 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:51.737 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.737 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:51.996 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:51.996 "name": "Existed_Raid", 00:21:51.996 "uuid": "5e04f3d6-428f-11ef-a0af-c98d8ee52a94", 00:21:51.996 "strip_size_kb": 64, 00:21:51.996 "state": "configuring", 00:21:51.996 "raid_level": "concat", 00:21:51.996 "superblock": true, 00:21:51.996 "num_base_bdevs": 3, 00:21:51.996 "num_base_bdevs_discovered": 0, 00:21:51.996 "num_base_bdevs_operational": 3, 00:21:51.996 "base_bdevs_list": [ 00:21:51.996 { 00:21:51.996 "name": "BaseBdev1", 00:21:51.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:51.996 "is_configured": false, 00:21:51.996 "data_offset": 0, 00:21:51.996 "data_size": 0 00:21:51.996 }, 00:21:51.996 { 00:21:51.996 "name": "BaseBdev2", 00:21:51.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:51.996 "is_configured": false, 00:21:51.996 "data_offset": 0, 00:21:51.996 "data_size": 0 00:21:51.996 }, 00:21:51.996 { 00:21:51.996 "name": "BaseBdev3", 00:21:51.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:51.996 "is_configured": false, 00:21:51.996 "data_offset": 0, 00:21:51.996 "data_size": 0 00:21:51.996 } 00:21:51.996 ] 00:21:51.996 }' 00:21:51.996 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:51.996 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:52.256 09:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:52.515 [2024-07-15 09:48:20.423441] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:52.515 [2024-07-15 09:48:20.423468] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x33d6d8634500 name Existed_Raid, state configuring 00:21:52.515 09:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:52.774 [2024-07-15 09:48:20.623461] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:52.774 [2024-07-15 09:48:20.623511] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:52.774 [2024-07-15 09:48:20.623515] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:52.774 [2024-07-15 09:48:20.623521] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:52.774 [2024-07-15 09:48:20.623524] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:52.775 
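(Annotation, not part of the captured trace.) The superblock variant (-s) registers the raid before any base bdev exists, which is why the bdev_open_ext NOTICE lines around this point report missing names; Existed_Raid then claims each base bdev as it is created. The equivalent direct sequence, using only commands and flags taken from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # 3-way concat raid, 64 KiB strip, with on-disk superblock (-s); none of
    # the named base bdevs exist yet, so the array stays in "configuring".
    "$rpc" -s "$sock" bdev_raid_create -z 64 -s -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    # Each malloc disk is claimed by Existed_Raid the moment it appears.
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev1
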
[2024-07-15 09:48:20.623530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:52.775 09:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:52.775 [2024-07-15 09:48:20.828612] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:52.775 BaseBdev1 00:21:52.775 09:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:21:52.775 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:21:52.775 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:52.775 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:52.775 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:52.775 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:52.775 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:53.035 09:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:53.295 [ 00:21:53.295 { 00:21:53.295 "name": "BaseBdev1", 00:21:53.295 "aliases": [ 00:21:53.295 "5eb24d3d-428f-11ef-a0af-c98d8ee52a94" 00:21:53.295 ], 00:21:53.295 "product_name": "Malloc disk", 00:21:53.295 "block_size": 512, 00:21:53.295 "num_blocks": 65536, 00:21:53.295 "uuid": "5eb24d3d-428f-11ef-a0af-c98d8ee52a94", 00:21:53.295 "assigned_rate_limits": { 00:21:53.295 "rw_ios_per_sec": 0, 00:21:53.295 "rw_mbytes_per_sec": 0, 00:21:53.295 "r_mbytes_per_sec": 0, 00:21:53.295 "w_mbytes_per_sec": 0 00:21:53.295 }, 00:21:53.295 "claimed": true, 00:21:53.295 "claim_type": "exclusive_write", 00:21:53.295 "zoned": false, 00:21:53.295 "supported_io_types": { 00:21:53.295 "read": true, 00:21:53.295 "write": true, 00:21:53.295 "unmap": true, 00:21:53.295 "flush": true, 00:21:53.295 "reset": true, 00:21:53.295 "nvme_admin": false, 00:21:53.295 "nvme_io": false, 00:21:53.295 "nvme_io_md": false, 00:21:53.295 "write_zeroes": true, 00:21:53.295 "zcopy": true, 00:21:53.295 "get_zone_info": false, 00:21:53.295 "zone_management": false, 00:21:53.295 "zone_append": false, 00:21:53.295 "compare": false, 00:21:53.295 "compare_and_write": false, 00:21:53.295 "abort": true, 00:21:53.295 "seek_hole": false, 00:21:53.295 "seek_data": false, 00:21:53.295 "copy": true, 00:21:53.295 "nvme_iov_md": false 00:21:53.295 }, 00:21:53.295 "memory_domains": [ 00:21:53.295 { 00:21:53.295 "dma_device_id": "system", 00:21:53.295 "dma_device_type": 1 00:21:53.295 }, 00:21:53.295 { 00:21:53.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:53.295 "dma_device_type": 2 00:21:53.295 } 00:21:53.295 ], 00:21:53.295 "driver_specific": {} 00:21:53.295 } 00:21:53.295 ] 00:21:53.295 09:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:53.295 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:53.295 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- 
# local raid_bdev_name=Existed_Raid 00:21:53.295 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:53.295 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:53.295 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:53.295 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:53.295 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:53.295 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:53.295 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:53.295 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:53.295 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:53.295 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.555 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:53.555 "name": "Existed_Raid", 00:21:53.555 "uuid": "5e932bcd-428f-11ef-a0af-c98d8ee52a94", 00:21:53.555 "strip_size_kb": 64, 00:21:53.555 "state": "configuring", 00:21:53.555 "raid_level": "concat", 00:21:53.555 "superblock": true, 00:21:53.555 "num_base_bdevs": 3, 00:21:53.555 "num_base_bdevs_discovered": 1, 00:21:53.555 "num_base_bdevs_operational": 3, 00:21:53.555 "base_bdevs_list": [ 00:21:53.555 { 00:21:53.555 "name": "BaseBdev1", 00:21:53.555 "uuid": "5eb24d3d-428f-11ef-a0af-c98d8ee52a94", 00:21:53.555 "is_configured": true, 00:21:53.555 "data_offset": 2048, 00:21:53.555 "data_size": 63488 00:21:53.555 }, 00:21:53.555 { 00:21:53.555 "name": "BaseBdev2", 00:21:53.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.555 "is_configured": false, 00:21:53.555 "data_offset": 0, 00:21:53.555 "data_size": 0 00:21:53.555 }, 00:21:53.555 { 00:21:53.555 "name": "BaseBdev3", 00:21:53.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.555 "is_configured": false, 00:21:53.555 "data_offset": 0, 00:21:53.555 "data_size": 0 00:21:53.555 } 00:21:53.555 ] 00:21:53.555 }' 00:21:53.555 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:53.555 09:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:53.865 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:54.124 [2024-07-15 09:48:22.107536] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:54.124 [2024-07-15 09:48:22.107565] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x33d6d8634500 name Existed_Raid, state configuring 00:21:54.124 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:54.384 [2024-07-15 09:48:22.335572] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:54.384 [2024-07-15 
09:48:22.336498] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:54.384 [2024-07-15 09:48:22.336541] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:54.384 [2024-07-15 09:48:22.336546] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:54.384 [2024-07-15 09:48:22.336553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:54.384 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:21:54.384 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:54.384 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:54.384 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:54.384 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:54.384 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:54.384 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:54.384 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:54.384 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:54.384 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:54.384 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:54.384 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:54.384 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.384 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:54.643 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:54.643 "name": "Existed_Raid", 00:21:54.643 "uuid": "5f986b0a-428f-11ef-a0af-c98d8ee52a94", 00:21:54.643 "strip_size_kb": 64, 00:21:54.643 "state": "configuring", 00:21:54.643 "raid_level": "concat", 00:21:54.643 "superblock": true, 00:21:54.643 "num_base_bdevs": 3, 00:21:54.643 "num_base_bdevs_discovered": 1, 00:21:54.643 "num_base_bdevs_operational": 3, 00:21:54.643 "base_bdevs_list": [ 00:21:54.643 { 00:21:54.643 "name": "BaseBdev1", 00:21:54.643 "uuid": "5eb24d3d-428f-11ef-a0af-c98d8ee52a94", 00:21:54.643 "is_configured": true, 00:21:54.643 "data_offset": 2048, 00:21:54.643 "data_size": 63488 00:21:54.643 }, 00:21:54.643 { 00:21:54.643 "name": "BaseBdev2", 00:21:54.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.643 "is_configured": false, 00:21:54.643 "data_offset": 0, 00:21:54.643 "data_size": 0 00:21:54.643 }, 00:21:54.643 { 00:21:54.643 "name": "BaseBdev3", 00:21:54.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.643 "is_configured": false, 00:21:54.643 "data_offset": 0, 00:21:54.643 "data_size": 0 00:21:54.643 } 00:21:54.643 ] 00:21:54.643 }' 00:21:54.643 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:54.643 
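
The bdev_raid_create flags in the trace map one-to-one onto the fields bdev_raid_get_bdevs reports back: -z 64 becomes "strip_size_kb": 64, -r concat becomes "raid_level": "concat", -s shows up as "superblock": true, and -n names the array Existed_Raid. Since only BaseBdev1 exists at create time, the RPC is accepted but the array waits in "configuring" with num_base_bdevs_discovered 1 of 3. A condensed sketch of that create-then-inspect step, built only from the rpc.py and jq invocations visible above (the $rpc shorthand is introduced here for brevity and is not part of the test script):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"  # shorthand, assumed
  # 3-member concat with a 64 KiB strip and an on-disk superblock (-s)
  $rpc bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
  # missing members are tolerated; the array parks in "configuring"
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'
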
09:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:54.902 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:55.161 [2024-07-15 09:48:23.071742] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:55.161 BaseBdev2 00:21:55.161 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:21:55.161 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:21:55.161 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:55.161 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:55.161 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:55.161 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:55.161 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:55.419 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:55.419 [ 00:21:55.419 { 00:21:55.419 "name": "BaseBdev2", 00:21:55.419 "aliases": [ 00:21:55.419 "6008babc-428f-11ef-a0af-c98d8ee52a94" 00:21:55.419 ], 00:21:55.419 "product_name": "Malloc disk", 00:21:55.419 "block_size": 512, 00:21:55.419 "num_blocks": 65536, 00:21:55.419 "uuid": "6008babc-428f-11ef-a0af-c98d8ee52a94", 00:21:55.419 "assigned_rate_limits": { 00:21:55.419 "rw_ios_per_sec": 0, 00:21:55.419 "rw_mbytes_per_sec": 0, 00:21:55.419 "r_mbytes_per_sec": 0, 00:21:55.419 "w_mbytes_per_sec": 0 00:21:55.419 }, 00:21:55.419 "claimed": true, 00:21:55.419 "claim_type": "exclusive_write", 00:21:55.419 "zoned": false, 00:21:55.419 "supported_io_types": { 00:21:55.419 "read": true, 00:21:55.419 "write": true, 00:21:55.419 "unmap": true, 00:21:55.419 "flush": true, 00:21:55.419 "reset": true, 00:21:55.419 "nvme_admin": false, 00:21:55.419 "nvme_io": false, 00:21:55.419 "nvme_io_md": false, 00:21:55.419 "write_zeroes": true, 00:21:55.419 "zcopy": true, 00:21:55.419 "get_zone_info": false, 00:21:55.419 "zone_management": false, 00:21:55.419 "zone_append": false, 00:21:55.419 "compare": false, 00:21:55.419 "compare_and_write": false, 00:21:55.419 "abort": true, 00:21:55.419 "seek_hole": false, 00:21:55.419 "seek_data": false, 00:21:55.419 "copy": true, 00:21:55.419 "nvme_iov_md": false 00:21:55.419 }, 00:21:55.419 "memory_domains": [ 00:21:55.419 { 00:21:55.419 "dma_device_id": "system", 00:21:55.419 "dma_device_type": 1 00:21:55.419 }, 00:21:55.419 { 00:21:55.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:55.419 "dma_device_type": 2 00:21:55.419 } 00:21:55.419 ], 00:21:55.419 "driver_specific": {} 00:21:55.419 } 00:21:55.419 ] 00:21:55.419 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:55.419 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:55.419 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:55.419 09:48:23 
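
Each bdev_malloc_create above is followed by the waitforbdev helper: bdev_wait_for_examine lets claim/examine callbacks settle, then bdev_get_bdevs -b <name> -t 2000 polls for the bdev using the 2000 ms default that the [[ -z '' ]] branch assigns to bdev_timeout. Once the raid module has claimed the member, its JSON shows "claimed": true with "claim_type": "exclusive_write". A minimal sketch of the sequence, assuming nothing beyond the RPCs shown (and the $rpc shorthand from the earlier sketch):

  # 32 MiB backing device: the JSON above reports 65536 blocks of 512 bytes
  $rpc bdev_malloc_create 32 512 -b BaseBdev2
  $rpc bdev_wait_for_examine                 # flush examine/claim callbacks
  $rpc bdev_get_bdevs -b BaseBdev2 -t 2000   # fails if the bdev never appears within 2 s
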
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:55.419 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:55.419 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:55.419 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:55.419 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:55.419 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:55.419 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:55.419 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:55.419 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:55.419 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:55.419 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:55.419 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:55.678 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:55.678 "name": "Existed_Raid", 00:21:55.678 "uuid": "5f986b0a-428f-11ef-a0af-c98d8ee52a94", 00:21:55.678 "strip_size_kb": 64, 00:21:55.678 "state": "configuring", 00:21:55.678 "raid_level": "concat", 00:21:55.678 "superblock": true, 00:21:55.678 "num_base_bdevs": 3, 00:21:55.678 "num_base_bdevs_discovered": 2, 00:21:55.678 "num_base_bdevs_operational": 3, 00:21:55.678 "base_bdevs_list": [ 00:21:55.678 { 00:21:55.678 "name": "BaseBdev1", 00:21:55.678 "uuid": "5eb24d3d-428f-11ef-a0af-c98d8ee52a94", 00:21:55.678 "is_configured": true, 00:21:55.678 "data_offset": 2048, 00:21:55.678 "data_size": 63488 00:21:55.678 }, 00:21:55.678 { 00:21:55.678 "name": "BaseBdev2", 00:21:55.678 "uuid": "6008babc-428f-11ef-a0af-c98d8ee52a94", 00:21:55.678 "is_configured": true, 00:21:55.678 "data_offset": 2048, 00:21:55.678 "data_size": 63488 00:21:55.678 }, 00:21:55.678 { 00:21:55.678 "name": "BaseBdev3", 00:21:55.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:55.678 "is_configured": false, 00:21:55.678 "data_offset": 0, 00:21:55.678 "data_size": 0 00:21:55.678 } 00:21:55.678 ] 00:21:55.678 }' 00:21:55.678 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:55.678 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:55.937 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:56.195 [2024-07-15 09:48:24.159758] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:56.196 [2024-07-15 09:48:24.159815] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x33d6d8634a00 00:21:56.196 [2024-07-15 09:48:24.159820] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:56.196 [2024-07-15 09:48:24.159837] bdev_raid.c: 
251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x33d6d8697e20 00:21:56.196 [2024-07-15 09:48:24.159875] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x33d6d8634a00 00:21:56.196 [2024-07-15 09:48:24.159878] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x33d6d8634a00 00:21:56.196 [2024-07-15 09:48:24.159894] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:56.196 BaseBdev3 00:21:56.196 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:21:56.196 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:21:56.196 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:56.196 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:56.196 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:56.196 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:56.196 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:56.454 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:56.713 [ 00:21:56.713 { 00:21:56.713 "name": "BaseBdev3", 00:21:56.713 "aliases": [ 00:21:56.713 "60aec10b-428f-11ef-a0af-c98d8ee52a94" 00:21:56.713 ], 00:21:56.713 "product_name": "Malloc disk", 00:21:56.713 "block_size": 512, 00:21:56.713 "num_blocks": 65536, 00:21:56.713 "uuid": "60aec10b-428f-11ef-a0af-c98d8ee52a94", 00:21:56.713 "assigned_rate_limits": { 00:21:56.713 "rw_ios_per_sec": 0, 00:21:56.713 "rw_mbytes_per_sec": 0, 00:21:56.713 "r_mbytes_per_sec": 0, 00:21:56.713 "w_mbytes_per_sec": 0 00:21:56.713 }, 00:21:56.713 "claimed": true, 00:21:56.713 "claim_type": "exclusive_write", 00:21:56.713 "zoned": false, 00:21:56.713 "supported_io_types": { 00:21:56.713 "read": true, 00:21:56.713 "write": true, 00:21:56.713 "unmap": true, 00:21:56.713 "flush": true, 00:21:56.713 "reset": true, 00:21:56.713 "nvme_admin": false, 00:21:56.713 "nvme_io": false, 00:21:56.713 "nvme_io_md": false, 00:21:56.713 "write_zeroes": true, 00:21:56.713 "zcopy": true, 00:21:56.713 "get_zone_info": false, 00:21:56.713 "zone_management": false, 00:21:56.713 "zone_append": false, 00:21:56.713 "compare": false, 00:21:56.713 "compare_and_write": false, 00:21:56.713 "abort": true, 00:21:56.713 "seek_hole": false, 00:21:56.713 "seek_data": false, 00:21:56.713 "copy": true, 00:21:56.713 "nvme_iov_md": false 00:21:56.713 }, 00:21:56.713 "memory_domains": [ 00:21:56.713 { 00:21:56.713 "dma_device_id": "system", 00:21:56.713 "dma_device_type": 1 00:21:56.713 }, 00:21:56.713 { 00:21:56.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:56.713 "dma_device_type": 2 00:21:56.713 } 00:21:56.713 ], 00:21:56.713 "driver_specific": {} 00:21:56.713 } 00:21:56.713 ] 00:21:56.713 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:56.713 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:56.713 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < 
num_base_bdevs )) 00:21:56.713 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:21:56.713 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:56.713 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:56.713 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:56.713 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:56.713 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:56.713 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:56.713 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:56.713 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:56.713 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:56.713 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:56.713 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:56.713 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:56.713 "name": "Existed_Raid", 00:21:56.713 "uuid": "5f986b0a-428f-11ef-a0af-c98d8ee52a94", 00:21:56.713 "strip_size_kb": 64, 00:21:56.713 "state": "online", 00:21:56.713 "raid_level": "concat", 00:21:56.713 "superblock": true, 00:21:56.713 "num_base_bdevs": 3, 00:21:56.713 "num_base_bdevs_discovered": 3, 00:21:56.713 "num_base_bdevs_operational": 3, 00:21:56.713 "base_bdevs_list": [ 00:21:56.713 { 00:21:56.713 "name": "BaseBdev1", 00:21:56.713 "uuid": "5eb24d3d-428f-11ef-a0af-c98d8ee52a94", 00:21:56.713 "is_configured": true, 00:21:56.713 "data_offset": 2048, 00:21:56.713 "data_size": 63488 00:21:56.713 }, 00:21:56.713 { 00:21:56.713 "name": "BaseBdev2", 00:21:56.713 "uuid": "6008babc-428f-11ef-a0af-c98d8ee52a94", 00:21:56.713 "is_configured": true, 00:21:56.713 "data_offset": 2048, 00:21:56.713 "data_size": 63488 00:21:56.713 }, 00:21:56.713 { 00:21:56.713 "name": "BaseBdev3", 00:21:56.713 "uuid": "60aec10b-428f-11ef-a0af-c98d8ee52a94", 00:21:56.713 "is_configured": true, 00:21:56.713 "data_offset": 2048, 00:21:56.713 "data_size": 63488 00:21:56.713 } 00:21:56.713 ] 00:21:56.713 }' 00:21:56.713 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:56.713 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:56.971 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:21:56.971 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:56.971 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:56.971 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:56.971 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:56.971 09:48:25 
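
The capacity arithmetic is all visible in the trace: each malloc member exposes 65536 blocks, the superblock variant (-s) reserves 2048 blocks at the head of each member ("data_offset": 2048, "data_size": 63488), and concat simply sums the remainders, which is why raid_bdev_configure_cont reported "blockcnt 190464, blocklen 512" when the array came online. As a quick check:

  echo $(( 3 * (65536 - 2048) ))   # 190464 blocks * 512 B = 93 MiB, matching blockcnt above
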
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:21:56.971 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:56.971 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:57.229 [2024-07-15 09:48:25.295729] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:57.229 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:57.229 "name": "Existed_Raid", 00:21:57.229 "aliases": [ 00:21:57.229 "5f986b0a-428f-11ef-a0af-c98d8ee52a94" 00:21:57.229 ], 00:21:57.229 "product_name": "Raid Volume", 00:21:57.229 "block_size": 512, 00:21:57.229 "num_blocks": 190464, 00:21:57.229 "uuid": "5f986b0a-428f-11ef-a0af-c98d8ee52a94", 00:21:57.229 "assigned_rate_limits": { 00:21:57.229 "rw_ios_per_sec": 0, 00:21:57.229 "rw_mbytes_per_sec": 0, 00:21:57.229 "r_mbytes_per_sec": 0, 00:21:57.229 "w_mbytes_per_sec": 0 00:21:57.229 }, 00:21:57.229 "claimed": false, 00:21:57.229 "zoned": false, 00:21:57.229 "supported_io_types": { 00:21:57.229 "read": true, 00:21:57.229 "write": true, 00:21:57.229 "unmap": true, 00:21:57.229 "flush": true, 00:21:57.229 "reset": true, 00:21:57.229 "nvme_admin": false, 00:21:57.229 "nvme_io": false, 00:21:57.229 "nvme_io_md": false, 00:21:57.229 "write_zeroes": true, 00:21:57.229 "zcopy": false, 00:21:57.229 "get_zone_info": false, 00:21:57.229 "zone_management": false, 00:21:57.229 "zone_append": false, 00:21:57.229 "compare": false, 00:21:57.229 "compare_and_write": false, 00:21:57.229 "abort": false, 00:21:57.229 "seek_hole": false, 00:21:57.229 "seek_data": false, 00:21:57.229 "copy": false, 00:21:57.229 "nvme_iov_md": false 00:21:57.229 }, 00:21:57.229 "memory_domains": [ 00:21:57.229 { 00:21:57.229 "dma_device_id": "system", 00:21:57.229 "dma_device_type": 1 00:21:57.229 }, 00:21:57.229 { 00:21:57.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:57.229 "dma_device_type": 2 00:21:57.229 }, 00:21:57.229 { 00:21:57.229 "dma_device_id": "system", 00:21:57.229 "dma_device_type": 1 00:21:57.229 }, 00:21:57.229 { 00:21:57.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:57.229 "dma_device_type": 2 00:21:57.229 }, 00:21:57.229 { 00:21:57.229 "dma_device_id": "system", 00:21:57.229 "dma_device_type": 1 00:21:57.229 }, 00:21:57.229 { 00:21:57.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:57.229 "dma_device_type": 2 00:21:57.229 } 00:21:57.229 ], 00:21:57.229 "driver_specific": { 00:21:57.229 "raid": { 00:21:57.229 "uuid": "5f986b0a-428f-11ef-a0af-c98d8ee52a94", 00:21:57.229 "strip_size_kb": 64, 00:21:57.229 "state": "online", 00:21:57.229 "raid_level": "concat", 00:21:57.229 "superblock": true, 00:21:57.229 "num_base_bdevs": 3, 00:21:57.229 "num_base_bdevs_discovered": 3, 00:21:57.229 "num_base_bdevs_operational": 3, 00:21:57.229 "base_bdevs_list": [ 00:21:57.229 { 00:21:57.229 "name": "BaseBdev1", 00:21:57.229 "uuid": "5eb24d3d-428f-11ef-a0af-c98d8ee52a94", 00:21:57.229 "is_configured": true, 00:21:57.229 "data_offset": 2048, 00:21:57.229 "data_size": 63488 00:21:57.229 }, 00:21:57.229 { 00:21:57.229 "name": "BaseBdev2", 00:21:57.229 "uuid": "6008babc-428f-11ef-a0af-c98d8ee52a94", 00:21:57.229 "is_configured": true, 00:21:57.229 "data_offset": 2048, 00:21:57.229 "data_size": 63488 00:21:57.229 }, 00:21:57.229 { 00:21:57.229 "name": "BaseBdev3", 00:21:57.229 "uuid": 
"60aec10b-428f-11ef-a0af-c98d8ee52a94", 00:21:57.229 "is_configured": true, 00:21:57.229 "data_offset": 2048, 00:21:57.229 "data_size": 63488 00:21:57.229 } 00:21:57.229 ] 00:21:57.229 } 00:21:57.229 } 00:21:57.229 }' 00:21:57.229 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:57.229 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:21:57.229 BaseBdev2 00:21:57.229 BaseBdev3' 00:21:57.229 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:57.229 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:21:57.229 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:57.494 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:57.494 "name": "BaseBdev1", 00:21:57.494 "aliases": [ 00:21:57.494 "5eb24d3d-428f-11ef-a0af-c98d8ee52a94" 00:21:57.494 ], 00:21:57.494 "product_name": "Malloc disk", 00:21:57.494 "block_size": 512, 00:21:57.494 "num_blocks": 65536, 00:21:57.494 "uuid": "5eb24d3d-428f-11ef-a0af-c98d8ee52a94", 00:21:57.494 "assigned_rate_limits": { 00:21:57.495 "rw_ios_per_sec": 0, 00:21:57.495 "rw_mbytes_per_sec": 0, 00:21:57.495 "r_mbytes_per_sec": 0, 00:21:57.495 "w_mbytes_per_sec": 0 00:21:57.495 }, 00:21:57.495 "claimed": true, 00:21:57.495 "claim_type": "exclusive_write", 00:21:57.495 "zoned": false, 00:21:57.495 "supported_io_types": { 00:21:57.495 "read": true, 00:21:57.495 "write": true, 00:21:57.495 "unmap": true, 00:21:57.495 "flush": true, 00:21:57.495 "reset": true, 00:21:57.495 "nvme_admin": false, 00:21:57.495 "nvme_io": false, 00:21:57.495 "nvme_io_md": false, 00:21:57.495 "write_zeroes": true, 00:21:57.495 "zcopy": true, 00:21:57.495 "get_zone_info": false, 00:21:57.495 "zone_management": false, 00:21:57.495 "zone_append": false, 00:21:57.495 "compare": false, 00:21:57.495 "compare_and_write": false, 00:21:57.495 "abort": true, 00:21:57.495 "seek_hole": false, 00:21:57.495 "seek_data": false, 00:21:57.495 "copy": true, 00:21:57.495 "nvme_iov_md": false 00:21:57.495 }, 00:21:57.495 "memory_domains": [ 00:21:57.495 { 00:21:57.495 "dma_device_id": "system", 00:21:57.495 "dma_device_type": 1 00:21:57.495 }, 00:21:57.495 { 00:21:57.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:57.495 "dma_device_type": 2 00:21:57.495 } 00:21:57.495 ], 00:21:57.495 "driver_specific": {} 00:21:57.495 }' 00:21:57.495 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:57.495 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:57.495 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:57.495 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:57.495 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:57.495 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:57.495 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:57.495 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:57.753 
09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:57.753 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:57.753 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:57.753 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:57.753 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:57.753 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:57.753 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:57.753 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:57.753 "name": "BaseBdev2", 00:21:57.753 "aliases": [ 00:21:57.753 "6008babc-428f-11ef-a0af-c98d8ee52a94" 00:21:57.753 ], 00:21:57.753 "product_name": "Malloc disk", 00:21:57.753 "block_size": 512, 00:21:57.753 "num_blocks": 65536, 00:21:57.753 "uuid": "6008babc-428f-11ef-a0af-c98d8ee52a94", 00:21:57.753 "assigned_rate_limits": { 00:21:57.753 "rw_ios_per_sec": 0, 00:21:57.753 "rw_mbytes_per_sec": 0, 00:21:57.753 "r_mbytes_per_sec": 0, 00:21:57.753 "w_mbytes_per_sec": 0 00:21:57.753 }, 00:21:57.753 "claimed": true, 00:21:57.753 "claim_type": "exclusive_write", 00:21:57.753 "zoned": false, 00:21:57.753 "supported_io_types": { 00:21:57.753 "read": true, 00:21:57.753 "write": true, 00:21:57.753 "unmap": true, 00:21:57.753 "flush": true, 00:21:57.753 "reset": true, 00:21:57.753 "nvme_admin": false, 00:21:57.753 "nvme_io": false, 00:21:57.753 "nvme_io_md": false, 00:21:57.753 "write_zeroes": true, 00:21:57.753 "zcopy": true, 00:21:57.753 "get_zone_info": false, 00:21:57.753 "zone_management": false, 00:21:57.753 "zone_append": false, 00:21:57.753 "compare": false, 00:21:57.753 "compare_and_write": false, 00:21:57.753 "abort": true, 00:21:57.753 "seek_hole": false, 00:21:57.753 "seek_data": false, 00:21:57.753 "copy": true, 00:21:57.753 "nvme_iov_md": false 00:21:57.753 }, 00:21:57.753 "memory_domains": [ 00:21:57.753 { 00:21:57.753 "dma_device_id": "system", 00:21:57.753 "dma_device_type": 1 00:21:57.753 }, 00:21:57.753 { 00:21:57.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:57.753 "dma_device_type": 2 00:21:57.753 } 00:21:57.753 ], 00:21:57.753 "driver_specific": {} 00:21:57.753 }' 00:21:57.753 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:58.011 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:58.011 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:58.011 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:58.011 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:58.011 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:58.011 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:58.011 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:58.011 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:58.011 09:48:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:58.011 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:58.011 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:58.011 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:58.011 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:58.011 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:58.269 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:58.269 "name": "BaseBdev3", 00:21:58.269 "aliases": [ 00:21:58.269 "60aec10b-428f-11ef-a0af-c98d8ee52a94" 00:21:58.269 ], 00:21:58.269 "product_name": "Malloc disk", 00:21:58.269 "block_size": 512, 00:21:58.270 "num_blocks": 65536, 00:21:58.270 "uuid": "60aec10b-428f-11ef-a0af-c98d8ee52a94", 00:21:58.270 "assigned_rate_limits": { 00:21:58.270 "rw_ios_per_sec": 0, 00:21:58.270 "rw_mbytes_per_sec": 0, 00:21:58.270 "r_mbytes_per_sec": 0, 00:21:58.270 "w_mbytes_per_sec": 0 00:21:58.270 }, 00:21:58.270 "claimed": true, 00:21:58.270 "claim_type": "exclusive_write", 00:21:58.270 "zoned": false, 00:21:58.270 "supported_io_types": { 00:21:58.270 "read": true, 00:21:58.270 "write": true, 00:21:58.270 "unmap": true, 00:21:58.270 "flush": true, 00:21:58.270 "reset": true, 00:21:58.270 "nvme_admin": false, 00:21:58.270 "nvme_io": false, 00:21:58.270 "nvme_io_md": false, 00:21:58.270 "write_zeroes": true, 00:21:58.270 "zcopy": true, 00:21:58.270 "get_zone_info": false, 00:21:58.270 "zone_management": false, 00:21:58.270 "zone_append": false, 00:21:58.270 "compare": false, 00:21:58.270 "compare_and_write": false, 00:21:58.270 "abort": true, 00:21:58.270 "seek_hole": false, 00:21:58.270 "seek_data": false, 00:21:58.270 "copy": true, 00:21:58.270 "nvme_iov_md": false 00:21:58.270 }, 00:21:58.270 "memory_domains": [ 00:21:58.270 { 00:21:58.270 "dma_device_id": "system", 00:21:58.270 "dma_device_type": 1 00:21:58.270 }, 00:21:58.270 { 00:21:58.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:58.270 "dma_device_type": 2 00:21:58.270 } 00:21:58.270 ], 00:21:58.270 "driver_specific": {} 00:21:58.270 }' 00:21:58.270 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:58.270 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:58.270 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:58.270 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:58.270 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:58.270 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:58.270 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:58.270 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:58.270 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:58.270 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:58.270 09:48:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:58.270 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:58.270 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:58.528 [2024-07-15 09:48:26.427812] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:58.528 [2024-07-15 09:48:26.427842] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:58.528 [2024-07-15 09:48:26.427867] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:58.528 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:21:58.528 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:21:58.528 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:58.528 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:21:58.528 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:21:58.528 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:21:58.528 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:58.528 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:21:58.528 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:58.528 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:58.528 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:58.528 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:58.528 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:58.528 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:58.528 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:58.528 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:58.528 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:58.787 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:58.787 "name": "Existed_Raid", 00:21:58.787 "uuid": "5f986b0a-428f-11ef-a0af-c98d8ee52a94", 00:21:58.787 "strip_size_kb": 64, 00:21:58.787 "state": "offline", 00:21:58.787 "raid_level": "concat", 00:21:58.787 "superblock": true, 00:21:58.787 "num_base_bdevs": 3, 00:21:58.787 "num_base_bdevs_discovered": 2, 00:21:58.787 "num_base_bdevs_operational": 2, 00:21:58.787 "base_bdevs_list": [ 00:21:58.787 { 00:21:58.787 "name": null, 00:21:58.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.787 "is_configured": false, 00:21:58.787 "data_offset": 2048, 00:21:58.787 "data_size": 63488 00:21:58.787 }, 00:21:58.787 { 00:21:58.787 "name": "BaseBdev2", 00:21:58.787 "uuid": 
"6008babc-428f-11ef-a0af-c98d8ee52a94", 00:21:58.787 "is_configured": true, 00:21:58.787 "data_offset": 2048, 00:21:58.787 "data_size": 63488 00:21:58.787 }, 00:21:58.787 { 00:21:58.787 "name": "BaseBdev3", 00:21:58.787 "uuid": "60aec10b-428f-11ef-a0af-c98d8ee52a94", 00:21:58.787 "is_configured": true, 00:21:58.787 "data_offset": 2048, 00:21:58.787 "data_size": 63488 00:21:58.787 } 00:21:58.787 ] 00:21:58.787 }' 00:21:58.787 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:58.787 09:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:59.046 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:21:59.046 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:59.046 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.046 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:59.046 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:59.046 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:59.305 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:59.305 [2024-07-15 09:48:27.332244] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:59.305 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:59.305 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:59.305 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.305 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:59.564 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:59.564 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:59.564 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:59.822 [2024-07-15 09:48:27.736813] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:59.822 [2024-07-15 09:48:27.736843] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x33d6d8634a00 name Existed_Raid, state offline 00:21:59.822 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:59.822 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:59.822 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.822 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:22:00.080 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:22:00.080 09:48:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:22:00.080 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:22:00.080 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:22:00.080 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:00.080 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:00.347 BaseBdev2 00:22:00.347 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:22:00.347 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:22:00.347 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:00.347 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:00.347 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:00.347 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:00.347 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:00.347 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:00.605 [ 00:22:00.605 { 00:22:00.605 "name": "BaseBdev2", 00:22:00.605 "aliases": [ 00:22:00.605 "631508cc-428f-11ef-a0af-c98d8ee52a94" 00:22:00.605 ], 00:22:00.605 "product_name": "Malloc disk", 00:22:00.605 "block_size": 512, 00:22:00.605 "num_blocks": 65536, 00:22:00.605 "uuid": "631508cc-428f-11ef-a0af-c98d8ee52a94", 00:22:00.605 "assigned_rate_limits": { 00:22:00.605 "rw_ios_per_sec": 0, 00:22:00.605 "rw_mbytes_per_sec": 0, 00:22:00.605 "r_mbytes_per_sec": 0, 00:22:00.605 "w_mbytes_per_sec": 0 00:22:00.605 }, 00:22:00.605 "claimed": false, 00:22:00.605 "zoned": false, 00:22:00.605 "supported_io_types": { 00:22:00.605 "read": true, 00:22:00.605 "write": true, 00:22:00.605 "unmap": true, 00:22:00.605 "flush": true, 00:22:00.605 "reset": true, 00:22:00.605 "nvme_admin": false, 00:22:00.605 "nvme_io": false, 00:22:00.605 "nvme_io_md": false, 00:22:00.605 "write_zeroes": true, 00:22:00.605 "zcopy": true, 00:22:00.605 "get_zone_info": false, 00:22:00.605 "zone_management": false, 00:22:00.605 "zone_append": false, 00:22:00.605 "compare": false, 00:22:00.605 "compare_and_write": false, 00:22:00.605 "abort": true, 00:22:00.605 "seek_hole": false, 00:22:00.605 "seek_data": false, 00:22:00.605 "copy": true, 00:22:00.605 "nvme_iov_md": false 00:22:00.605 }, 00:22:00.605 "memory_domains": [ 00:22:00.605 { 00:22:00.605 "dma_device_id": "system", 00:22:00.605 "dma_device_type": 1 00:22:00.605 }, 00:22:00.605 { 00:22:00.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:00.605 "dma_device_type": 2 00:22:00.605 } 00:22:00.605 ], 00:22:00.605 "driver_specific": {} 00:22:00.605 } 00:22:00.605 ] 00:22:00.605 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:00.605 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:00.605 09:48:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:00.605 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:00.864 BaseBdev3 00:22:00.864 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:22:00.864 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:22:00.864 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:00.864 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:00.864 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:00.864 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:00.864 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:01.123 09:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:01.123 [ 00:22:01.123 { 00:22:01.123 "name": "BaseBdev3", 00:22:01.123 "aliases": [ 00:22:01.123 "637440e6-428f-11ef-a0af-c98d8ee52a94" 00:22:01.123 ], 00:22:01.123 "product_name": "Malloc disk", 00:22:01.123 "block_size": 512, 00:22:01.123 "num_blocks": 65536, 00:22:01.123 "uuid": "637440e6-428f-11ef-a0af-c98d8ee52a94", 00:22:01.123 "assigned_rate_limits": { 00:22:01.123 "rw_ios_per_sec": 0, 00:22:01.123 "rw_mbytes_per_sec": 0, 00:22:01.123 "r_mbytes_per_sec": 0, 00:22:01.123 "w_mbytes_per_sec": 0 00:22:01.123 }, 00:22:01.123 "claimed": false, 00:22:01.123 "zoned": false, 00:22:01.123 "supported_io_types": { 00:22:01.123 "read": true, 00:22:01.123 "write": true, 00:22:01.123 "unmap": true, 00:22:01.123 "flush": true, 00:22:01.123 "reset": true, 00:22:01.123 "nvme_admin": false, 00:22:01.123 "nvme_io": false, 00:22:01.123 "nvme_io_md": false, 00:22:01.123 "write_zeroes": true, 00:22:01.123 "zcopy": true, 00:22:01.123 "get_zone_info": false, 00:22:01.123 "zone_management": false, 00:22:01.123 "zone_append": false, 00:22:01.123 "compare": false, 00:22:01.123 "compare_and_write": false, 00:22:01.123 "abort": true, 00:22:01.123 "seek_hole": false, 00:22:01.123 "seek_data": false, 00:22:01.123 "copy": true, 00:22:01.123 "nvme_iov_md": false 00:22:01.123 }, 00:22:01.123 "memory_domains": [ 00:22:01.123 { 00:22:01.123 "dma_device_id": "system", 00:22:01.123 "dma_device_type": 1 00:22:01.123 }, 00:22:01.123 { 00:22:01.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:01.123 "dma_device_type": 2 00:22:01.123 } 00:22:01.123 ], 00:22:01.123 "driver_specific": {} 00:22:01.123 } 00:22:01.123 ] 00:22:01.383 09:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:01.383 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:01.383 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:01.383 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 
BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:01.383 [2024-07-15 09:48:29.441504] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:01.383 [2024-07-15 09:48:29.441566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:01.383 [2024-07-15 09:48:29.441575] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:01.383 [2024-07-15 09:48:29.442247] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:01.383 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:22:01.383 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:01.383 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:01.383 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:01.383 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:01.384 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:01.384 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:01.384 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:01.384 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:01.384 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:01.384 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:01.384 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:01.642 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:01.642 "name": "Existed_Raid", 00:22:01.642 "uuid": "63d4b27a-428f-11ef-a0af-c98d8ee52a94", 00:22:01.642 "strip_size_kb": 64, 00:22:01.642 "state": "configuring", 00:22:01.642 "raid_level": "concat", 00:22:01.642 "superblock": true, 00:22:01.642 "num_base_bdevs": 3, 00:22:01.642 "num_base_bdevs_discovered": 2, 00:22:01.642 "num_base_bdevs_operational": 3, 00:22:01.642 "base_bdevs_list": [ 00:22:01.642 { 00:22:01.642 "name": "BaseBdev1", 00:22:01.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.642 "is_configured": false, 00:22:01.642 "data_offset": 0, 00:22:01.642 "data_size": 0 00:22:01.642 }, 00:22:01.642 { 00:22:01.642 "name": "BaseBdev2", 00:22:01.642 "uuid": "631508cc-428f-11ef-a0af-c98d8ee52a94", 00:22:01.642 "is_configured": true, 00:22:01.642 "data_offset": 2048, 00:22:01.642 "data_size": 63488 00:22:01.642 }, 00:22:01.642 { 00:22:01.642 "name": "BaseBdev3", 00:22:01.642 "uuid": "637440e6-428f-11ef-a0af-c98d8ee52a94", 00:22:01.642 "is_configured": true, 00:22:01.642 "data_offset": 2048, 00:22:01.642 "data_size": 63488 00:22:01.642 } 00:22:01.642 ] 00:22:01.642 }' 00:22:01.642 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:01.642 09:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:01.899 09:48:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:02.158 [2024-07-15 09:48:30.149514] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:02.158 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:22:02.158 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:02.158 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:02.158 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:02.158 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:02.158 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:02.158 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:02.158 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:02.158 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:02.158 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:02.158 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.159 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:02.417 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:02.417 "name": "Existed_Raid", 00:22:02.417 "uuid": "63d4b27a-428f-11ef-a0af-c98d8ee52a94", 00:22:02.417 "strip_size_kb": 64, 00:22:02.417 "state": "configuring", 00:22:02.417 "raid_level": "concat", 00:22:02.417 "superblock": true, 00:22:02.417 "num_base_bdevs": 3, 00:22:02.417 "num_base_bdevs_discovered": 1, 00:22:02.417 "num_base_bdevs_operational": 3, 00:22:02.417 "base_bdevs_list": [ 00:22:02.417 { 00:22:02.417 "name": "BaseBdev1", 00:22:02.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.417 "is_configured": false, 00:22:02.417 "data_offset": 0, 00:22:02.417 "data_size": 0 00:22:02.417 }, 00:22:02.417 { 00:22:02.417 "name": null, 00:22:02.417 "uuid": "631508cc-428f-11ef-a0af-c98d8ee52a94", 00:22:02.417 "is_configured": false, 00:22:02.417 "data_offset": 2048, 00:22:02.417 "data_size": 63488 00:22:02.417 }, 00:22:02.417 { 00:22:02.417 "name": "BaseBdev3", 00:22:02.417 "uuid": "637440e6-428f-11ef-a0af-c98d8ee52a94", 00:22:02.417 "is_configured": true, 00:22:02.417 "data_offset": 2048, 00:22:02.417 "data_size": 63488 00:22:02.417 } 00:22:02.417 ] 00:22:02.417 }' 00:22:02.417 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:02.417 09:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:02.675 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.675 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:02.934 09:48:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:22:02.934 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:03.193 [2024-07-15 09:48:31.085683] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:03.193 BaseBdev1 00:22:03.193 09:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:22:03.193 09:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:22:03.193 09:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:03.193 09:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:03.193 09:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:03.193 09:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:03.193 09:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:03.453 09:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:03.453 [ 00:22:03.453 { 00:22:03.453 "name": "BaseBdev1", 00:22:03.453 "aliases": [ 00:22:03.453 "64cf9013-428f-11ef-a0af-c98d8ee52a94" 00:22:03.453 ], 00:22:03.453 "product_name": "Malloc disk", 00:22:03.453 "block_size": 512, 00:22:03.453 "num_blocks": 65536, 00:22:03.453 "uuid": "64cf9013-428f-11ef-a0af-c98d8ee52a94", 00:22:03.453 "assigned_rate_limits": { 00:22:03.453 "rw_ios_per_sec": 0, 00:22:03.453 "rw_mbytes_per_sec": 0, 00:22:03.453 "r_mbytes_per_sec": 0, 00:22:03.453 "w_mbytes_per_sec": 0 00:22:03.453 }, 00:22:03.453 "claimed": true, 00:22:03.453 "claim_type": "exclusive_write", 00:22:03.453 "zoned": false, 00:22:03.453 "supported_io_types": { 00:22:03.453 "read": true, 00:22:03.453 "write": true, 00:22:03.453 "unmap": true, 00:22:03.453 "flush": true, 00:22:03.453 "reset": true, 00:22:03.453 "nvme_admin": false, 00:22:03.453 "nvme_io": false, 00:22:03.453 "nvme_io_md": false, 00:22:03.453 "write_zeroes": true, 00:22:03.453 "zcopy": true, 00:22:03.453 "get_zone_info": false, 00:22:03.453 "zone_management": false, 00:22:03.453 "zone_append": false, 00:22:03.453 "compare": false, 00:22:03.453 "compare_and_write": false, 00:22:03.453 "abort": true, 00:22:03.453 "seek_hole": false, 00:22:03.453 "seek_data": false, 00:22:03.453 "copy": true, 00:22:03.453 "nvme_iov_md": false 00:22:03.453 }, 00:22:03.453 "memory_domains": [ 00:22:03.453 { 00:22:03.453 "dma_device_id": "system", 00:22:03.453 "dma_device_type": 1 00:22:03.453 }, 00:22:03.453 { 00:22:03.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:03.453 "dma_device_type": 2 00:22:03.453 } 00:22:03.453 ], 00:22:03.453 "driver_specific": {} 00:22:03.453 } 00:22:03.453 ] 00:22:03.453 09:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:03.453 09:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:22:03.453 09:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:03.453 09:48:31 
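
Note the ordering after bdev_malloc_create 32 512 -b BaseBdev1: the "bdev BaseBdev1 is claimed" notice fires during examine, before waitforbdev even runs, because the array created at bdev_raid.sh@305 already lists BaseBdev1 by name. Creating the missing member is therefore all it takes to flip its slot from is_configured false to true, as the @315 check below confirms:

  $rpc bdev_malloc_create 32 512 -b BaseBdev1    # claimed by Existed_Raid on examine
  $rpc bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[0].is_configured'   # expected: true
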
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:03.453 09:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:03.453 09:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:03.453 09:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:03.453 09:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:03.453 09:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:03.453 09:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:03.453 09:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:03.453 09:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.453 09:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:03.712 09:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:03.712 "name": "Existed_Raid", 00:22:03.712 "uuid": "63d4b27a-428f-11ef-a0af-c98d8ee52a94", 00:22:03.712 "strip_size_kb": 64, 00:22:03.712 "state": "configuring", 00:22:03.712 "raid_level": "concat", 00:22:03.712 "superblock": true, 00:22:03.712 "num_base_bdevs": 3, 00:22:03.712 "num_base_bdevs_discovered": 2, 00:22:03.712 "num_base_bdevs_operational": 3, 00:22:03.712 "base_bdevs_list": [ 00:22:03.712 { 00:22:03.712 "name": "BaseBdev1", 00:22:03.712 "uuid": "64cf9013-428f-11ef-a0af-c98d8ee52a94", 00:22:03.712 "is_configured": true, 00:22:03.712 "data_offset": 2048, 00:22:03.712 "data_size": 63488 00:22:03.712 }, 00:22:03.712 { 00:22:03.712 "name": null, 00:22:03.712 "uuid": "631508cc-428f-11ef-a0af-c98d8ee52a94", 00:22:03.712 "is_configured": false, 00:22:03.712 "data_offset": 2048, 00:22:03.712 "data_size": 63488 00:22:03.712 }, 00:22:03.712 { 00:22:03.712 "name": "BaseBdev3", 00:22:03.712 "uuid": "637440e6-428f-11ef-a0af-c98d8ee52a94", 00:22:03.712 "is_configured": true, 00:22:03.712 "data_offset": 2048, 00:22:03.712 "data_size": 63488 00:22:03.712 } 00:22:03.712 ] 00:22:03.712 }' 00:22:03.712 09:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:03.712 09:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:03.970 09:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.970 09:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:04.228 09:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:22:04.228 09:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:22:04.486 [2024-07-15 09:48:32.409631] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:04.486 09:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:22:04.486 09:48:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:04.486 09:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:04.486 09:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:04.486 09:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:04.486 09:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:04.486 09:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:04.486 09:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:04.486 09:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:04.486 09:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:04.486 09:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.486 09:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:04.745 09:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:04.745 "name": "Existed_Raid", 00:22:04.745 "uuid": "63d4b27a-428f-11ef-a0af-c98d8ee52a94", 00:22:04.745 "strip_size_kb": 64, 00:22:04.745 "state": "configuring", 00:22:04.745 "raid_level": "concat", 00:22:04.745 "superblock": true, 00:22:04.745 "num_base_bdevs": 3, 00:22:04.745 "num_base_bdevs_discovered": 1, 00:22:04.745 "num_base_bdevs_operational": 3, 00:22:04.745 "base_bdevs_list": [ 00:22:04.745 { 00:22:04.745 "name": "BaseBdev1", 00:22:04.745 "uuid": "64cf9013-428f-11ef-a0af-c98d8ee52a94", 00:22:04.745 "is_configured": true, 00:22:04.745 "data_offset": 2048, 00:22:04.745 "data_size": 63488 00:22:04.745 }, 00:22:04.745 { 00:22:04.745 "name": null, 00:22:04.745 "uuid": "631508cc-428f-11ef-a0af-c98d8ee52a94", 00:22:04.745 "is_configured": false, 00:22:04.745 "data_offset": 2048, 00:22:04.745 "data_size": 63488 00:22:04.745 }, 00:22:04.745 { 00:22:04.745 "name": null, 00:22:04.745 "uuid": "637440e6-428f-11ef-a0af-c98d8ee52a94", 00:22:04.745 "is_configured": false, 00:22:04.745 "data_offset": 2048, 00:22:04.745 "data_size": 63488 00:22:04.745 } 00:22:04.745 ] 00:22:04.745 }' 00:22:04.745 09:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:04.745 09:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:05.004 09:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.004 09:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:05.262 09:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:22:05.262 09:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:05.262 [2024-07-15 09:48:33.353689] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:05.521 09:48:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:22:05.521 09:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:05.521 09:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:05.521 09:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:05.521 09:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:05.521 09:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:05.521 09:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:05.521 09:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:05.521 09:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:05.521 09:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:05.521 09:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:05.521 09:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.780 09:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:05.780 "name": "Existed_Raid", 00:22:05.780 "uuid": "63d4b27a-428f-11ef-a0af-c98d8ee52a94", 00:22:05.780 "strip_size_kb": 64, 00:22:05.780 "state": "configuring", 00:22:05.780 "raid_level": "concat", 00:22:05.780 "superblock": true, 00:22:05.780 "num_base_bdevs": 3, 00:22:05.780 "num_base_bdevs_discovered": 2, 00:22:05.780 "num_base_bdevs_operational": 3, 00:22:05.780 "base_bdevs_list": [ 00:22:05.780 { 00:22:05.780 "name": "BaseBdev1", 00:22:05.780 "uuid": "64cf9013-428f-11ef-a0af-c98d8ee52a94", 00:22:05.780 "is_configured": true, 00:22:05.780 "data_offset": 2048, 00:22:05.780 "data_size": 63488 00:22:05.780 }, 00:22:05.780 { 00:22:05.780 "name": null, 00:22:05.780 "uuid": "631508cc-428f-11ef-a0af-c98d8ee52a94", 00:22:05.780 "is_configured": false, 00:22:05.780 "data_offset": 2048, 00:22:05.780 "data_size": 63488 00:22:05.780 }, 00:22:05.780 { 00:22:05.780 "name": "BaseBdev3", 00:22:05.780 "uuid": "637440e6-428f-11ef-a0af-c98d8ee52a94", 00:22:05.780 "is_configured": true, 00:22:05.780 "data_offset": 2048, 00:22:05.780 "data_size": 63488 00:22:05.780 } 00:22:05.780 ] 00:22:05.780 }' 00:22:05.780 09:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:05.780 09:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.039 09:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.039 09:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:06.298 09:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:22:06.298 09:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:06.298 
[2024-07-15 09:48:34.349744] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:06.298 09:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:22:06.298 09:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:06.298 09:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:06.298 09:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:06.298 09:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:06.298 09:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:06.298 09:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:06.298 09:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:06.298 09:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:06.298 09:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:06.298 09:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.298 09:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:06.556 09:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:06.556 "name": "Existed_Raid", 00:22:06.556 "uuid": "63d4b27a-428f-11ef-a0af-c98d8ee52a94", 00:22:06.556 "strip_size_kb": 64, 00:22:06.556 "state": "configuring", 00:22:06.556 "raid_level": "concat", 00:22:06.556 "superblock": true, 00:22:06.556 "num_base_bdevs": 3, 00:22:06.556 "num_base_bdevs_discovered": 1, 00:22:06.556 "num_base_bdevs_operational": 3, 00:22:06.556 "base_bdevs_list": [ 00:22:06.556 { 00:22:06.556 "name": null, 00:22:06.556 "uuid": "64cf9013-428f-11ef-a0af-c98d8ee52a94", 00:22:06.556 "is_configured": false, 00:22:06.556 "data_offset": 2048, 00:22:06.556 "data_size": 63488 00:22:06.556 }, 00:22:06.556 { 00:22:06.556 "name": null, 00:22:06.556 "uuid": "631508cc-428f-11ef-a0af-c98d8ee52a94", 00:22:06.556 "is_configured": false, 00:22:06.556 "data_offset": 2048, 00:22:06.556 "data_size": 63488 00:22:06.556 }, 00:22:06.556 { 00:22:06.556 "name": "BaseBdev3", 00:22:06.556 "uuid": "637440e6-428f-11ef-a0af-c98d8ee52a94", 00:22:06.556 "is_configured": true, 00:22:06.556 "data_offset": 2048, 00:22:06.556 "data_size": 63488 00:22:06.556 } 00:22:06.556 ] 00:22:06.556 }' 00:22:06.556 09:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:06.556 09:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.815 09:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.815 09:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:07.074 09:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:22:07.074 09:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:07.333 [2024-07-15 09:48:35.294285] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:07.333 09:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:22:07.333 09:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:07.333 09:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:07.333 09:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:07.333 09:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:07.333 09:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:07.333 09:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:07.333 09:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:07.333 09:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:07.333 09:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:07.333 09:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.333 09:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:07.591 09:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:07.591 "name": "Existed_Raid", 00:22:07.591 "uuid": "63d4b27a-428f-11ef-a0af-c98d8ee52a94", 00:22:07.591 "strip_size_kb": 64, 00:22:07.591 "state": "configuring", 00:22:07.591 "raid_level": "concat", 00:22:07.591 "superblock": true, 00:22:07.591 "num_base_bdevs": 3, 00:22:07.591 "num_base_bdevs_discovered": 2, 00:22:07.591 "num_base_bdevs_operational": 3, 00:22:07.591 "base_bdevs_list": [ 00:22:07.591 { 00:22:07.591 "name": null, 00:22:07.591 "uuid": "64cf9013-428f-11ef-a0af-c98d8ee52a94", 00:22:07.591 "is_configured": false, 00:22:07.591 "data_offset": 2048, 00:22:07.591 "data_size": 63488 00:22:07.591 }, 00:22:07.591 { 00:22:07.591 "name": "BaseBdev2", 00:22:07.591 "uuid": "631508cc-428f-11ef-a0af-c98d8ee52a94", 00:22:07.591 "is_configured": true, 00:22:07.591 "data_offset": 2048, 00:22:07.591 "data_size": 63488 00:22:07.591 }, 00:22:07.591 { 00:22:07.591 "name": "BaseBdev3", 00:22:07.591 "uuid": "637440e6-428f-11ef-a0af-c98d8ee52a94", 00:22:07.591 "is_configured": true, 00:22:07.591 "data_offset": 2048, 00:22:07.591 "data_size": 63488 00:22:07.591 } 00:22:07.591 ] 00:22:07.591 }' 00:22:07.591 09:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:07.591 09:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.850 09:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.850 09:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:08.109 09:48:36 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:22:08.109 09:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.109 09:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:08.368 09:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 64cf9013-428f-11ef-a0af-c98d8ee52a94 00:22:08.368 [2024-07-15 09:48:36.426485] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:08.368 [2024-07-15 09:48:36.426534] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x33d6d8634a00 00:22:08.368 [2024-07-15 09:48:36.426538] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:08.368 [2024-07-15 09:48:36.426555] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x33d6d8697e20 00:22:08.368 [2024-07-15 09:48:36.426590] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x33d6d8634a00 00:22:08.368 [2024-07-15 09:48:36.426594] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x33d6d8634a00 00:22:08.368 [2024-07-15 09:48:36.426610] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:08.368 NewBaseBdev 00:22:08.368 09:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:22:08.368 09:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:22:08.368 09:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:08.368 09:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:08.368 09:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:08.368 09:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:08.368 09:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:08.631 09:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:08.928 [ 00:22:08.928 { 00:22:08.928 "name": "NewBaseBdev", 00:22:08.928 "aliases": [ 00:22:08.928 "64cf9013-428f-11ef-a0af-c98d8ee52a94" 00:22:08.928 ], 00:22:08.928 "product_name": "Malloc disk", 00:22:08.928 "block_size": 512, 00:22:08.928 "num_blocks": 65536, 00:22:08.928 "uuid": "64cf9013-428f-11ef-a0af-c98d8ee52a94", 00:22:08.928 "assigned_rate_limits": { 00:22:08.928 "rw_ios_per_sec": 0, 00:22:08.928 "rw_mbytes_per_sec": 0, 00:22:08.928 "r_mbytes_per_sec": 0, 00:22:08.928 "w_mbytes_per_sec": 0 00:22:08.928 }, 00:22:08.928 "claimed": true, 00:22:08.928 "claim_type": "exclusive_write", 00:22:08.928 "zoned": false, 00:22:08.928 "supported_io_types": { 00:22:08.928 "read": true, 00:22:08.928 "write": true, 00:22:08.928 "unmap": true, 00:22:08.928 "flush": true, 00:22:08.928 "reset": true, 00:22:08.928 "nvme_admin": false, 00:22:08.928 "nvme_io": false, 00:22:08.928 "nvme_io_md": false, 00:22:08.928 
"write_zeroes": true, 00:22:08.928 "zcopy": true, 00:22:08.928 "get_zone_info": false, 00:22:08.928 "zone_management": false, 00:22:08.928 "zone_append": false, 00:22:08.928 "compare": false, 00:22:08.928 "compare_and_write": false, 00:22:08.928 "abort": true, 00:22:08.928 "seek_hole": false, 00:22:08.928 "seek_data": false, 00:22:08.928 "copy": true, 00:22:08.928 "nvme_iov_md": false 00:22:08.928 }, 00:22:08.928 "memory_domains": [ 00:22:08.928 { 00:22:08.928 "dma_device_id": "system", 00:22:08.928 "dma_device_type": 1 00:22:08.928 }, 00:22:08.928 { 00:22:08.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:08.928 "dma_device_type": 2 00:22:08.928 } 00:22:08.928 ], 00:22:08.928 "driver_specific": {} 00:22:08.928 } 00:22:08.928 ] 00:22:08.928 09:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:08.928 09:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:22:08.928 09:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:08.928 09:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:08.928 09:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:08.928 09:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:08.928 09:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:08.928 09:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:08.928 09:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:08.928 09:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:08.928 09:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:08.928 09:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:08.928 09:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.205 09:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:09.205 "name": "Existed_Raid", 00:22:09.205 "uuid": "63d4b27a-428f-11ef-a0af-c98d8ee52a94", 00:22:09.205 "strip_size_kb": 64, 00:22:09.205 "state": "online", 00:22:09.205 "raid_level": "concat", 00:22:09.205 "superblock": true, 00:22:09.205 "num_base_bdevs": 3, 00:22:09.205 "num_base_bdevs_discovered": 3, 00:22:09.205 "num_base_bdevs_operational": 3, 00:22:09.205 "base_bdevs_list": [ 00:22:09.205 { 00:22:09.205 "name": "NewBaseBdev", 00:22:09.205 "uuid": "64cf9013-428f-11ef-a0af-c98d8ee52a94", 00:22:09.205 "is_configured": true, 00:22:09.205 "data_offset": 2048, 00:22:09.205 "data_size": 63488 00:22:09.205 }, 00:22:09.205 { 00:22:09.205 "name": "BaseBdev2", 00:22:09.205 "uuid": "631508cc-428f-11ef-a0af-c98d8ee52a94", 00:22:09.205 "is_configured": true, 00:22:09.205 "data_offset": 2048, 00:22:09.205 "data_size": 63488 00:22:09.205 }, 00:22:09.205 { 00:22:09.205 "name": "BaseBdev3", 00:22:09.205 "uuid": "637440e6-428f-11ef-a0af-c98d8ee52a94", 00:22:09.205 "is_configured": true, 00:22:09.205 "data_offset": 2048, 00:22:09.205 "data_size": 63488 00:22:09.205 } 00:22:09.205 ] 
00:22:09.205 }' 00:22:09.205 09:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:09.205 09:48:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.464 09:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:22:09.464 09:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:09.464 09:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:09.464 09:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:09.464 09:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:09.464 09:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:22:09.464 09:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:09.464 09:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:09.723 [2024-07-15 09:48:37.622442] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:09.723 09:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:09.723 "name": "Existed_Raid", 00:22:09.723 "aliases": [ 00:22:09.723 "63d4b27a-428f-11ef-a0af-c98d8ee52a94" 00:22:09.723 ], 00:22:09.723 "product_name": "Raid Volume", 00:22:09.723 "block_size": 512, 00:22:09.723 "num_blocks": 190464, 00:22:09.723 "uuid": "63d4b27a-428f-11ef-a0af-c98d8ee52a94", 00:22:09.723 "assigned_rate_limits": { 00:22:09.723 "rw_ios_per_sec": 0, 00:22:09.723 "rw_mbytes_per_sec": 0, 00:22:09.723 "r_mbytes_per_sec": 0, 00:22:09.723 "w_mbytes_per_sec": 0 00:22:09.723 }, 00:22:09.723 "claimed": false, 00:22:09.723 "zoned": false, 00:22:09.723 "supported_io_types": { 00:22:09.723 "read": true, 00:22:09.723 "write": true, 00:22:09.723 "unmap": true, 00:22:09.723 "flush": true, 00:22:09.723 "reset": true, 00:22:09.723 "nvme_admin": false, 00:22:09.723 "nvme_io": false, 00:22:09.723 "nvme_io_md": false, 00:22:09.723 "write_zeroes": true, 00:22:09.723 "zcopy": false, 00:22:09.723 "get_zone_info": false, 00:22:09.723 "zone_management": false, 00:22:09.723 "zone_append": false, 00:22:09.723 "compare": false, 00:22:09.723 "compare_and_write": false, 00:22:09.723 "abort": false, 00:22:09.723 "seek_hole": false, 00:22:09.723 "seek_data": false, 00:22:09.723 "copy": false, 00:22:09.723 "nvme_iov_md": false 00:22:09.723 }, 00:22:09.723 "memory_domains": [ 00:22:09.723 { 00:22:09.723 "dma_device_id": "system", 00:22:09.723 "dma_device_type": 1 00:22:09.723 }, 00:22:09.723 { 00:22:09.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.723 "dma_device_type": 2 00:22:09.723 }, 00:22:09.723 { 00:22:09.723 "dma_device_id": "system", 00:22:09.723 "dma_device_type": 1 00:22:09.723 }, 00:22:09.723 { 00:22:09.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.723 "dma_device_type": 2 00:22:09.723 }, 00:22:09.723 { 00:22:09.723 "dma_device_id": "system", 00:22:09.723 "dma_device_type": 1 00:22:09.723 }, 00:22:09.723 { 00:22:09.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.723 "dma_device_type": 2 00:22:09.723 } 00:22:09.723 ], 00:22:09.723 "driver_specific": { 00:22:09.723 "raid": { 00:22:09.723 "uuid": "63d4b27a-428f-11ef-a0af-c98d8ee52a94", 00:22:09.723 
"strip_size_kb": 64, 00:22:09.723 "state": "online", 00:22:09.723 "raid_level": "concat", 00:22:09.723 "superblock": true, 00:22:09.723 "num_base_bdevs": 3, 00:22:09.723 "num_base_bdevs_discovered": 3, 00:22:09.723 "num_base_bdevs_operational": 3, 00:22:09.723 "base_bdevs_list": [ 00:22:09.723 { 00:22:09.723 "name": "NewBaseBdev", 00:22:09.723 "uuid": "64cf9013-428f-11ef-a0af-c98d8ee52a94", 00:22:09.723 "is_configured": true, 00:22:09.723 "data_offset": 2048, 00:22:09.723 "data_size": 63488 00:22:09.723 }, 00:22:09.723 { 00:22:09.723 "name": "BaseBdev2", 00:22:09.723 "uuid": "631508cc-428f-11ef-a0af-c98d8ee52a94", 00:22:09.723 "is_configured": true, 00:22:09.723 "data_offset": 2048, 00:22:09.723 "data_size": 63488 00:22:09.723 }, 00:22:09.723 { 00:22:09.723 "name": "BaseBdev3", 00:22:09.723 "uuid": "637440e6-428f-11ef-a0af-c98d8ee52a94", 00:22:09.723 "is_configured": true, 00:22:09.723 "data_offset": 2048, 00:22:09.723 "data_size": 63488 00:22:09.723 } 00:22:09.723 ] 00:22:09.723 } 00:22:09.723 } 00:22:09.723 }' 00:22:09.723 09:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:09.723 09:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:22:09.723 BaseBdev2 00:22:09.723 BaseBdev3' 00:22:09.723 09:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:09.723 09:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:22:09.723 09:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:09.983 09:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:09.983 "name": "NewBaseBdev", 00:22:09.983 "aliases": [ 00:22:09.983 "64cf9013-428f-11ef-a0af-c98d8ee52a94" 00:22:09.983 ], 00:22:09.983 "product_name": "Malloc disk", 00:22:09.983 "block_size": 512, 00:22:09.983 "num_blocks": 65536, 00:22:09.983 "uuid": "64cf9013-428f-11ef-a0af-c98d8ee52a94", 00:22:09.983 "assigned_rate_limits": { 00:22:09.983 "rw_ios_per_sec": 0, 00:22:09.983 "rw_mbytes_per_sec": 0, 00:22:09.983 "r_mbytes_per_sec": 0, 00:22:09.983 "w_mbytes_per_sec": 0 00:22:09.983 }, 00:22:09.983 "claimed": true, 00:22:09.983 "claim_type": "exclusive_write", 00:22:09.983 "zoned": false, 00:22:09.983 "supported_io_types": { 00:22:09.983 "read": true, 00:22:09.983 "write": true, 00:22:09.983 "unmap": true, 00:22:09.983 "flush": true, 00:22:09.983 "reset": true, 00:22:09.983 "nvme_admin": false, 00:22:09.983 "nvme_io": false, 00:22:09.983 "nvme_io_md": false, 00:22:09.983 "write_zeroes": true, 00:22:09.983 "zcopy": true, 00:22:09.983 "get_zone_info": false, 00:22:09.983 "zone_management": false, 00:22:09.983 "zone_append": false, 00:22:09.983 "compare": false, 00:22:09.983 "compare_and_write": false, 00:22:09.983 "abort": true, 00:22:09.983 "seek_hole": false, 00:22:09.983 "seek_data": false, 00:22:09.983 "copy": true, 00:22:09.983 "nvme_iov_md": false 00:22:09.983 }, 00:22:09.983 "memory_domains": [ 00:22:09.983 { 00:22:09.983 "dma_device_id": "system", 00:22:09.983 "dma_device_type": 1 00:22:09.983 }, 00:22:09.983 { 00:22:09.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.983 "dma_device_type": 2 00:22:09.983 } 00:22:09.983 ], 00:22:09.983 "driver_specific": {} 00:22:09.983 }' 00:22:09.983 09:48:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:09.983 09:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:09.983 09:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:09.983 09:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:09.983 09:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:09.983 09:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:09.983 09:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:09.983 09:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:09.983 09:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:09.983 09:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:09.983 09:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:09.983 09:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:09.983 09:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:09.983 09:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:09.983 09:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:10.243 09:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:10.243 "name": "BaseBdev2", 00:22:10.243 "aliases": [ 00:22:10.243 "631508cc-428f-11ef-a0af-c98d8ee52a94" 00:22:10.243 ], 00:22:10.243 "product_name": "Malloc disk", 00:22:10.243 "block_size": 512, 00:22:10.243 "num_blocks": 65536, 00:22:10.243 "uuid": "631508cc-428f-11ef-a0af-c98d8ee52a94", 00:22:10.243 "assigned_rate_limits": { 00:22:10.243 "rw_ios_per_sec": 0, 00:22:10.243 "rw_mbytes_per_sec": 0, 00:22:10.243 "r_mbytes_per_sec": 0, 00:22:10.243 "w_mbytes_per_sec": 0 00:22:10.243 }, 00:22:10.243 "claimed": true, 00:22:10.243 "claim_type": "exclusive_write", 00:22:10.243 "zoned": false, 00:22:10.243 "supported_io_types": { 00:22:10.243 "read": true, 00:22:10.243 "write": true, 00:22:10.243 "unmap": true, 00:22:10.243 "flush": true, 00:22:10.243 "reset": true, 00:22:10.243 "nvme_admin": false, 00:22:10.243 "nvme_io": false, 00:22:10.243 "nvme_io_md": false, 00:22:10.243 "write_zeroes": true, 00:22:10.243 "zcopy": true, 00:22:10.243 "get_zone_info": false, 00:22:10.243 "zone_management": false, 00:22:10.243 "zone_append": false, 00:22:10.243 "compare": false, 00:22:10.243 "compare_and_write": false, 00:22:10.243 "abort": true, 00:22:10.243 "seek_hole": false, 00:22:10.243 "seek_data": false, 00:22:10.243 "copy": true, 00:22:10.243 "nvme_iov_md": false 00:22:10.243 }, 00:22:10.243 "memory_domains": [ 00:22:10.243 { 00:22:10.243 "dma_device_id": "system", 00:22:10.243 "dma_device_type": 1 00:22:10.243 }, 00:22:10.243 { 00:22:10.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:10.243 "dma_device_type": 2 00:22:10.243 } 00:22:10.243 ], 00:22:10.243 "driver_specific": {} 00:22:10.243 }' 00:22:10.243 09:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:10.243 09:48:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:10.243 09:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:10.243 09:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:10.243 09:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:10.243 09:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:10.243 09:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:10.243 09:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:10.243 09:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:10.243 09:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:10.243 09:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:10.243 09:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:10.243 09:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:10.243 09:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:10.243 09:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:10.502 09:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:10.502 "name": "BaseBdev3", 00:22:10.502 "aliases": [ 00:22:10.502 "637440e6-428f-11ef-a0af-c98d8ee52a94" 00:22:10.502 ], 00:22:10.502 "product_name": "Malloc disk", 00:22:10.502 "block_size": 512, 00:22:10.502 "num_blocks": 65536, 00:22:10.502 "uuid": "637440e6-428f-11ef-a0af-c98d8ee52a94", 00:22:10.502 "assigned_rate_limits": { 00:22:10.502 "rw_ios_per_sec": 0, 00:22:10.502 "rw_mbytes_per_sec": 0, 00:22:10.502 "r_mbytes_per_sec": 0, 00:22:10.502 "w_mbytes_per_sec": 0 00:22:10.502 }, 00:22:10.502 "claimed": true, 00:22:10.502 "claim_type": "exclusive_write", 00:22:10.502 "zoned": false, 00:22:10.502 "supported_io_types": { 00:22:10.502 "read": true, 00:22:10.502 "write": true, 00:22:10.502 "unmap": true, 00:22:10.502 "flush": true, 00:22:10.502 "reset": true, 00:22:10.502 "nvme_admin": false, 00:22:10.502 "nvme_io": false, 00:22:10.502 "nvme_io_md": false, 00:22:10.502 "write_zeroes": true, 00:22:10.502 "zcopy": true, 00:22:10.502 "get_zone_info": false, 00:22:10.502 "zone_management": false, 00:22:10.502 "zone_append": false, 00:22:10.502 "compare": false, 00:22:10.502 "compare_and_write": false, 00:22:10.502 "abort": true, 00:22:10.502 "seek_hole": false, 00:22:10.502 "seek_data": false, 00:22:10.502 "copy": true, 00:22:10.502 "nvme_iov_md": false 00:22:10.502 }, 00:22:10.502 "memory_domains": [ 00:22:10.502 { 00:22:10.502 "dma_device_id": "system", 00:22:10.502 "dma_device_type": 1 00:22:10.502 }, 00:22:10.502 { 00:22:10.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:10.502 "dma_device_type": 2 00:22:10.502 } 00:22:10.502 ], 00:22:10.502 "driver_specific": {} 00:22:10.502 }' 00:22:10.502 09:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:10.502 09:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:10.502 09:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
00:22:10.502 09:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:10.502 09:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:10.503 09:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:10.503 09:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:10.503 09:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:10.503 09:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:10.503 09:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:10.503 09:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:10.503 09:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:10.503 09:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:10.821 [2024-07-15 09:48:38.710494] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:10.821 [2024-07-15 09:48:38.710522] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:10.821 [2024-07-15 09:48:38.710542] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:10.821 [2024-07-15 09:48:38.710557] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:10.821 [2024-07-15 09:48:38.710561] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x33d6d8634a00 name Existed_Raid, state offline 00:22:10.821 09:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 54673 00:22:10.821 09:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 54673 ']' 00:22:10.821 09:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 54673 00:22:10.821 09:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:22:10.821 09:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:22:10.821 09:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 54673 00:22:10.821 09:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:22:10.821 09:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:22:10.821 09:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:22:10.821 killing process with pid 54673 00:22:10.821 09:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 54673' 00:22:10.821 09:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 54673 00:22:10.821 [2024-07-15 09:48:38.741231] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:10.821 09:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 54673 00:22:10.821 [2024-07-15 09:48:38.767467] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:11.080 09:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # 
return 0 00:22:11.080 00:22:11.080 real 0m20.555s 00:22:11.080 user 0m36.014s 00:22:11.080 sys 0m4.308s 00:22:11.080 09:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:11.080 09:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.080 ************************************ 00:22:11.080 END TEST raid_state_function_test_sb 00:22:11.080 ************************************ 00:22:11.080 09:48:39 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:11.080 09:48:39 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:22:11.080 09:48:39 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:22:11.080 09:48:39 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:11.080 09:48:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:11.080 ************************************ 00:22:11.080 START TEST raid_superblock_test 00:22:11.080 ************************************ 00:22:11.080 09:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 3 00:22:11.081 09:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:22:11.081 09:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:22:11.081 09:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:22:11.081 09:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:22:11.081 09:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:22:11.081 09:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:22:11.081 09:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:22:11.081 09:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:22:11.081 09:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:22:11.081 09:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:22:11.081 09:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:22:11.081 09:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:22:11.081 09:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:22:11.081 09:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:22:11.081 09:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:22:11.081 09:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:22:11.081 09:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=55389 00:22:11.081 09:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 55389 /var/tmp/spdk-raid.sock 00:22:11.081 09:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:22:11.081 09:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 55389 ']' 00:22:11.081 09:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:11.081 09:48:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:11.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:11.081 09:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:11.081 09:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:11.081 09:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:11.081 [2024-07-15 09:48:39.093208] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:22:11.081 [2024-07-15 09:48:39.093464] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:22:12.019 EAL: TSC is not safe to use in SMP mode 00:22:12.019 EAL: TSC is not invariant 00:22:12.019 [2024-07-15 09:48:39.794887] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.019 [2024-07-15 09:48:39.913543] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:22:12.019 [2024-07-15 09:48:39.915980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:12.019 [2024-07-15 09:48:39.916688] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:12.019 [2024-07-15 09:48:39.916700] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:12.019 09:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:12.019 09:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:22:12.019 09:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:22:12.019 09:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:12.019 09:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:22:12.019 09:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:22:12.019 09:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:12.019 09:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:12.019 09:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:22:12.019 09:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:12.019 09:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:22:12.278 malloc1 00:22:12.278 09:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:12.537 [2024-07-15 09:48:40.435495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:12.537 [2024-07-15 09:48:40.435559] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:12.537 [2024-07-15 09:48:40.435569] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x195dd3834780 00:22:12.537 [2024-07-15 09:48:40.435576] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:12.537 [2024-07-15 09:48:40.436631] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:12.537 [2024-07-15 09:48:40.436681] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:12.537 pt1 00:22:12.537 09:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:22:12.537 09:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:12.537 09:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:22:12.537 09:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:22:12.537 09:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:12.537 09:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:12.537 09:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:22:12.537 09:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:12.537 09:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:22:12.537 malloc2 00:22:12.797 09:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:12.797 [2024-07-15 09:48:40.847515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:12.797 [2024-07-15 09:48:40.847573] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:12.797 [2024-07-15 09:48:40.847582] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x195dd3834c80 00:22:12.797 [2024-07-15 09:48:40.847589] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:12.797 [2024-07-15 09:48:40.848319] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:12.797 [2024-07-15 09:48:40.848352] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:12.797 pt2 00:22:12.797 09:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:22:12.797 09:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:12.797 09:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:22:12.797 09:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:22:12.797 09:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:22:12.797 09:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:12.797 09:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:22:12.797 09:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:12.797 09:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:22:13.086 malloc3 00:22:13.086 09:48:41 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:13.345 [2024-07-15 09:48:41.283548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:13.345 [2024-07-15 09:48:41.283615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:13.345 [2024-07-15 09:48:41.283642] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x195dd3835180 00:22:13.345 [2024-07-15 09:48:41.283649] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:13.345 [2024-07-15 09:48:41.284421] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:13.345 [2024-07-15 09:48:41.284455] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:13.345 pt3 00:22:13.345 09:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:22:13.345 09:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:13.345 09:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:22:13.604 [2024-07-15 09:48:41.511566] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:13.604 [2024-07-15 09:48:41.512245] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:13.604 [2024-07-15 09:48:41.512271] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:13.604 [2024-07-15 09:48:41.512319] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x195dd3835400 00:22:13.604 [2024-07-15 09:48:41.512324] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:13.604 [2024-07-15 09:48:41.512360] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x195dd3897e20 00:22:13.604 [2024-07-15 09:48:41.512434] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x195dd3835400 00:22:13.604 [2024-07-15 09:48:41.512438] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x195dd3835400 00:22:13.604 [2024-07-15 09:48:41.512461] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:13.604 09:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:22:13.604 09:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:13.604 09:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:13.604 09:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:13.604 09:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:13.604 09:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:13.604 09:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:13.604 09:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:13.604 09:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:13.604 09:48:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:22:13.604 09:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.604 09:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.863 09:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:13.863 "name": "raid_bdev1", 00:22:13.863 "uuid": "6b067130-428f-11ef-a0af-c98d8ee52a94", 00:22:13.863 "strip_size_kb": 64, 00:22:13.863 "state": "online", 00:22:13.863 "raid_level": "concat", 00:22:13.863 "superblock": true, 00:22:13.863 "num_base_bdevs": 3, 00:22:13.863 "num_base_bdevs_discovered": 3, 00:22:13.863 "num_base_bdevs_operational": 3, 00:22:13.863 "base_bdevs_list": [ 00:22:13.863 { 00:22:13.863 "name": "pt1", 00:22:13.863 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:13.863 "is_configured": true, 00:22:13.863 "data_offset": 2048, 00:22:13.863 "data_size": 63488 00:22:13.863 }, 00:22:13.863 { 00:22:13.863 "name": "pt2", 00:22:13.863 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:13.863 "is_configured": true, 00:22:13.863 "data_offset": 2048, 00:22:13.863 "data_size": 63488 00:22:13.863 }, 00:22:13.863 { 00:22:13.863 "name": "pt3", 00:22:13.863 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:13.863 "is_configured": true, 00:22:13.863 "data_offset": 2048, 00:22:13.863 "data_size": 63488 00:22:13.863 } 00:22:13.863 ] 00:22:13.863 }' 00:22:13.863 09:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:13.863 09:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.122 09:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:22:14.122 09:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:22:14.122 09:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:14.122 09:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:14.122 09:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:14.122 09:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:14.122 09:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:14.122 09:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:14.381 [2024-07-15 09:48:42.383677] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:14.381 09:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:14.381 "name": "raid_bdev1", 00:22:14.381 "aliases": [ 00:22:14.381 "6b067130-428f-11ef-a0af-c98d8ee52a94" 00:22:14.381 ], 00:22:14.381 "product_name": "Raid Volume", 00:22:14.381 "block_size": 512, 00:22:14.381 "num_blocks": 190464, 00:22:14.381 "uuid": "6b067130-428f-11ef-a0af-c98d8ee52a94", 00:22:14.381 "assigned_rate_limits": { 00:22:14.381 "rw_ios_per_sec": 0, 00:22:14.381 "rw_mbytes_per_sec": 0, 00:22:14.381 "r_mbytes_per_sec": 0, 00:22:14.381 "w_mbytes_per_sec": 0 00:22:14.381 }, 00:22:14.381 "claimed": false, 00:22:14.381 "zoned": false, 00:22:14.381 "supported_io_types": { 00:22:14.381 "read": true, 00:22:14.381 "write": true, 00:22:14.381 "unmap": true, 
00:22:14.381 "flush": true, 00:22:14.381 "reset": true, 00:22:14.381 "nvme_admin": false, 00:22:14.381 "nvme_io": false, 00:22:14.381 "nvme_io_md": false, 00:22:14.381 "write_zeroes": true, 00:22:14.381 "zcopy": false, 00:22:14.381 "get_zone_info": false, 00:22:14.381 "zone_management": false, 00:22:14.381 "zone_append": false, 00:22:14.381 "compare": false, 00:22:14.381 "compare_and_write": false, 00:22:14.381 "abort": false, 00:22:14.381 "seek_hole": false, 00:22:14.381 "seek_data": false, 00:22:14.381 "copy": false, 00:22:14.381 "nvme_iov_md": false 00:22:14.381 }, 00:22:14.381 "memory_domains": [ 00:22:14.381 { 00:22:14.381 "dma_device_id": "system", 00:22:14.381 "dma_device_type": 1 00:22:14.381 }, 00:22:14.381 { 00:22:14.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.381 "dma_device_type": 2 00:22:14.381 }, 00:22:14.381 { 00:22:14.381 "dma_device_id": "system", 00:22:14.381 "dma_device_type": 1 00:22:14.381 }, 00:22:14.381 { 00:22:14.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.381 "dma_device_type": 2 00:22:14.381 }, 00:22:14.381 { 00:22:14.381 "dma_device_id": "system", 00:22:14.381 "dma_device_type": 1 00:22:14.381 }, 00:22:14.381 { 00:22:14.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.381 "dma_device_type": 2 00:22:14.381 } 00:22:14.381 ], 00:22:14.381 "driver_specific": { 00:22:14.381 "raid": { 00:22:14.381 "uuid": "6b067130-428f-11ef-a0af-c98d8ee52a94", 00:22:14.381 "strip_size_kb": 64, 00:22:14.381 "state": "online", 00:22:14.381 "raid_level": "concat", 00:22:14.381 "superblock": true, 00:22:14.381 "num_base_bdevs": 3, 00:22:14.381 "num_base_bdevs_discovered": 3, 00:22:14.381 "num_base_bdevs_operational": 3, 00:22:14.381 "base_bdevs_list": [ 00:22:14.381 { 00:22:14.381 "name": "pt1", 00:22:14.381 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:14.381 "is_configured": true, 00:22:14.381 "data_offset": 2048, 00:22:14.381 "data_size": 63488 00:22:14.381 }, 00:22:14.381 { 00:22:14.381 "name": "pt2", 00:22:14.381 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:14.381 "is_configured": true, 00:22:14.381 "data_offset": 2048, 00:22:14.381 "data_size": 63488 00:22:14.381 }, 00:22:14.381 { 00:22:14.381 "name": "pt3", 00:22:14.381 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:14.381 "is_configured": true, 00:22:14.381 "data_offset": 2048, 00:22:14.381 "data_size": 63488 00:22:14.381 } 00:22:14.381 ] 00:22:14.381 } 00:22:14.381 } 00:22:14.381 }' 00:22:14.381 09:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:14.381 09:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:22:14.381 pt2 00:22:14.381 pt3' 00:22:14.381 09:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:14.382 09:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:14.382 09:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:14.640 09:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:14.640 "name": "pt1", 00:22:14.640 "aliases": [ 00:22:14.640 "00000000-0000-0000-0000-000000000001" 00:22:14.640 ], 00:22:14.640 "product_name": "passthru", 00:22:14.640 "block_size": 512, 00:22:14.640 "num_blocks": 65536, 00:22:14.640 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:14.640 "assigned_rate_limits": { 
00:22:14.640 "rw_ios_per_sec": 0, 00:22:14.640 "rw_mbytes_per_sec": 0, 00:22:14.640 "r_mbytes_per_sec": 0, 00:22:14.640 "w_mbytes_per_sec": 0 00:22:14.640 }, 00:22:14.640 "claimed": true, 00:22:14.640 "claim_type": "exclusive_write", 00:22:14.640 "zoned": false, 00:22:14.640 "supported_io_types": { 00:22:14.640 "read": true, 00:22:14.640 "write": true, 00:22:14.640 "unmap": true, 00:22:14.640 "flush": true, 00:22:14.640 "reset": true, 00:22:14.640 "nvme_admin": false, 00:22:14.640 "nvme_io": false, 00:22:14.640 "nvme_io_md": false, 00:22:14.640 "write_zeroes": true, 00:22:14.640 "zcopy": true, 00:22:14.640 "get_zone_info": false, 00:22:14.640 "zone_management": false, 00:22:14.640 "zone_append": false, 00:22:14.640 "compare": false, 00:22:14.640 "compare_and_write": false, 00:22:14.640 "abort": true, 00:22:14.640 "seek_hole": false, 00:22:14.640 "seek_data": false, 00:22:14.640 "copy": true, 00:22:14.640 "nvme_iov_md": false 00:22:14.640 }, 00:22:14.640 "memory_domains": [ 00:22:14.640 { 00:22:14.640 "dma_device_id": "system", 00:22:14.640 "dma_device_type": 1 00:22:14.640 }, 00:22:14.640 { 00:22:14.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.640 "dma_device_type": 2 00:22:14.640 } 00:22:14.640 ], 00:22:14.640 "driver_specific": { 00:22:14.640 "passthru": { 00:22:14.640 "name": "pt1", 00:22:14.640 "base_bdev_name": "malloc1" 00:22:14.640 } 00:22:14.640 } 00:22:14.640 }' 00:22:14.640 09:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:14.640 09:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:14.640 09:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:14.640 09:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:14.640 09:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:14.640 09:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:14.640 09:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:14.640 09:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:14.640 09:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:14.640 09:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:14.640 09:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:14.640 09:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:14.640 09:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:14.640 09:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:14.640 09:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:14.898 09:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:14.898 "name": "pt2", 00:22:14.898 "aliases": [ 00:22:14.898 "00000000-0000-0000-0000-000000000002" 00:22:14.898 ], 00:22:14.898 "product_name": "passthru", 00:22:14.898 "block_size": 512, 00:22:14.898 "num_blocks": 65536, 00:22:14.898 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:14.898 "assigned_rate_limits": { 00:22:14.898 "rw_ios_per_sec": 0, 00:22:14.898 "rw_mbytes_per_sec": 0, 00:22:14.898 "r_mbytes_per_sec": 0, 00:22:14.898 "w_mbytes_per_sec": 0 00:22:14.898 
}, 00:22:14.898 "claimed": true, 00:22:14.898 "claim_type": "exclusive_write", 00:22:14.898 "zoned": false, 00:22:14.898 "supported_io_types": { 00:22:14.898 "read": true, 00:22:14.898 "write": true, 00:22:14.898 "unmap": true, 00:22:14.898 "flush": true, 00:22:14.898 "reset": true, 00:22:14.898 "nvme_admin": false, 00:22:14.898 "nvme_io": false, 00:22:14.898 "nvme_io_md": false, 00:22:14.898 "write_zeroes": true, 00:22:14.898 "zcopy": true, 00:22:14.898 "get_zone_info": false, 00:22:14.898 "zone_management": false, 00:22:14.898 "zone_append": false, 00:22:14.898 "compare": false, 00:22:14.898 "compare_and_write": false, 00:22:14.898 "abort": true, 00:22:14.898 "seek_hole": false, 00:22:14.898 "seek_data": false, 00:22:14.898 "copy": true, 00:22:14.898 "nvme_iov_md": false 00:22:14.898 }, 00:22:14.898 "memory_domains": [ 00:22:14.898 { 00:22:14.898 "dma_device_id": "system", 00:22:14.898 "dma_device_type": 1 00:22:14.898 }, 00:22:14.898 { 00:22:14.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.898 "dma_device_type": 2 00:22:14.898 } 00:22:14.898 ], 00:22:14.898 "driver_specific": { 00:22:14.898 "passthru": { 00:22:14.898 "name": "pt2", 00:22:14.898 "base_bdev_name": "malloc2" 00:22:14.898 } 00:22:14.898 } 00:22:14.898 }' 00:22:14.898 09:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:14.898 09:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:14.898 09:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:14.898 09:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:14.898 09:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:14.898 09:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:14.898 09:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:15.156 09:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:15.156 09:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:15.156 09:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:15.156 09:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:15.156 09:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:15.156 09:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:15.156 09:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:22:15.156 09:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:15.414 09:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:15.414 "name": "pt3", 00:22:15.414 "aliases": [ 00:22:15.414 "00000000-0000-0000-0000-000000000003" 00:22:15.414 ], 00:22:15.414 "product_name": "passthru", 00:22:15.414 "block_size": 512, 00:22:15.414 "num_blocks": 65536, 00:22:15.414 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:15.414 "assigned_rate_limits": { 00:22:15.414 "rw_ios_per_sec": 0, 00:22:15.414 "rw_mbytes_per_sec": 0, 00:22:15.414 "r_mbytes_per_sec": 0, 00:22:15.414 "w_mbytes_per_sec": 0 00:22:15.414 }, 00:22:15.414 "claimed": true, 00:22:15.414 "claim_type": "exclusive_write", 00:22:15.414 "zoned": false, 00:22:15.414 "supported_io_types": { 
00:22:15.414 "read": true, 00:22:15.414 "write": true, 00:22:15.414 "unmap": true, 00:22:15.414 "flush": true, 00:22:15.414 "reset": true, 00:22:15.414 "nvme_admin": false, 00:22:15.414 "nvme_io": false, 00:22:15.414 "nvme_io_md": false, 00:22:15.414 "write_zeroes": true, 00:22:15.414 "zcopy": true, 00:22:15.414 "get_zone_info": false, 00:22:15.414 "zone_management": false, 00:22:15.414 "zone_append": false, 00:22:15.414 "compare": false, 00:22:15.414 "compare_and_write": false, 00:22:15.414 "abort": true, 00:22:15.414 "seek_hole": false, 00:22:15.414 "seek_data": false, 00:22:15.414 "copy": true, 00:22:15.414 "nvme_iov_md": false 00:22:15.414 }, 00:22:15.414 "memory_domains": [ 00:22:15.414 { 00:22:15.414 "dma_device_id": "system", 00:22:15.414 "dma_device_type": 1 00:22:15.414 }, 00:22:15.414 { 00:22:15.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:15.414 "dma_device_type": 2 00:22:15.414 } 00:22:15.414 ], 00:22:15.414 "driver_specific": { 00:22:15.414 "passthru": { 00:22:15.414 "name": "pt3", 00:22:15.414 "base_bdev_name": "malloc3" 00:22:15.414 } 00:22:15.414 } 00:22:15.414 }' 00:22:15.414 09:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:15.414 09:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:15.414 09:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:15.414 09:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:15.414 09:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:15.414 09:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:15.414 09:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:15.414 09:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:15.414 09:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:15.414 09:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:15.414 09:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:15.414 09:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:15.414 09:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:15.414 09:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:22:15.672 [2024-07-15 09:48:43.671767] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:15.672 09:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=6b067130-428f-11ef-a0af-c98d8ee52a94 00:22:15.672 09:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 6b067130-428f-11ef-a0af-c98d8ee52a94 ']' 00:22:15.672 09:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:15.930 [2024-07-15 09:48:43.919714] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:15.930 [2024-07-15 09:48:43.919747] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:15.930 [2024-07-15 09:48:43.919776] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:15.930 [2024-07-15 09:48:43.919794] 
bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:15.930 [2024-07-15 09:48:43.919798] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x195dd3835400 name raid_bdev1, state offline 00:22:15.930 09:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:22:15.930 09:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:16.218 09:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:22:16.218 09:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:22:16.218 09:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:22:16.218 09:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:16.479 09:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:22:16.479 09:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:16.738 09:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:22:16.738 09:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:16.996 09:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:22:16.996 09:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:17.254 09:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:22:17.254 09:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:22:17.254 09:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:22:17.254 09:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:22:17.254 09:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:17.254 09:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:17.254 09:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:17.254 09:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:17.254 09:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:17.254 09:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:17.254 09:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 
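The NOT/valid_exec_arg records around this point come from autotest_common.sh: the suite is asserting that re-creating raid_bdev1 on base bdevs that already carry a superblock must fail, as the JSON-RPC "File exists" response below confirms. A minimal sketch of that expected-failure pattern, with the helper internals simplified; only the command line and the es bookkeeping are taken from the trace:

    # simplified sketch of autotest_common.sh's NOT(): run a command and
    # succeed only if it exits non-zero
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }

    # this duplicate create is expected to fail with -17 "File exists"
    NOT ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create \
        -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1

The real helper is more involved; the (( es > 128 )) check visible in the trace below is how it tells signal deaths apart from ordinary failures.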
00:22:17.254 09:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:22:17.254 09:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:22:17.254 [2024-07-15 09:48:45.331838] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:17.254 [2024-07-15 09:48:45.332555] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:17.254 [2024-07-15 09:48:45.332572] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:22:17.254 [2024-07-15 09:48:45.332587] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:17.254 [2024-07-15 09:48:45.332632] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:17.254 [2024-07-15 09:48:45.332641] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:22:17.254 [2024-07-15 09:48:45.332649] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:17.254 [2024-07-15 09:48:45.332654] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x195dd3835180 name raid_bdev1, state configuring 00:22:17.254 request: 00:22:17.254 { 00:22:17.254 "name": "raid_bdev1", 00:22:17.254 "raid_level": "concat", 00:22:17.254 "base_bdevs": [ 00:22:17.254 "malloc1", 00:22:17.254 "malloc2", 00:22:17.254 "malloc3" 00:22:17.254 ], 00:22:17.254 "strip_size_kb": 64, 00:22:17.254 "superblock": false, 00:22:17.254 "method": "bdev_raid_create", 00:22:17.254 "req_id": 1 00:22:17.254 } 00:22:17.254 Got JSON-RPC error response 00:22:17.254 response: 00:22:17.254 { 00:22:17.254 "code": -17, 00:22:17.254 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:17.254 } 00:22:17.512 09:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:22:17.512 09:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:17.512 09:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:17.512 09:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:17.512 09:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:17.512 09:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:22:17.512 09:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:22:17.512 09:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:22:17.512 09:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:17.770 [2024-07-15 09:48:45.815888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:17.771 [2024-07-15 09:48:45.815968] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:17.771 [2024-07-15 09:48:45.815982] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x195dd3834c80 00:22:17.771 [2024-07-15 09:48:45.815990] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:17.771 [2024-07-15 09:48:45.816763] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:17.771 [2024-07-15 09:48:45.816793] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:17.771 [2024-07-15 09:48:45.816818] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:17.771 [2024-07-15 09:48:45.816831] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:17.771 pt1 00:22:17.771 09:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:22:17.771 09:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:17.771 09:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:17.771 09:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:17.771 09:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:17.771 09:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:17.771 09:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:17.771 09:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:17.771 09:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:17.771 09:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:17.771 09:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:17.771 09:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.029 09:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:18.029 "name": "raid_bdev1", 00:22:18.029 "uuid": "6b067130-428f-11ef-a0af-c98d8ee52a94", 00:22:18.029 "strip_size_kb": 64, 00:22:18.029 "state": "configuring", 00:22:18.029 "raid_level": "concat", 00:22:18.029 "superblock": true, 00:22:18.029 "num_base_bdevs": 3, 00:22:18.029 "num_base_bdevs_discovered": 1, 00:22:18.029 "num_base_bdevs_operational": 3, 00:22:18.029 "base_bdevs_list": [ 00:22:18.029 { 00:22:18.029 "name": "pt1", 00:22:18.029 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:18.029 "is_configured": true, 00:22:18.029 "data_offset": 2048, 00:22:18.029 "data_size": 63488 00:22:18.029 }, 00:22:18.029 { 00:22:18.029 "name": null, 00:22:18.029 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:18.029 "is_configured": false, 00:22:18.029 "data_offset": 2048, 00:22:18.029 "data_size": 63488 00:22:18.029 }, 00:22:18.029 { 00:22:18.029 "name": null, 00:22:18.029 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:18.029 "is_configured": false, 00:22:18.029 "data_offset": 2048, 00:22:18.029 "data_size": 63488 00:22:18.029 } 00:22:18.029 ] 00:22:18.029 }' 00:22:18.029 09:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:18.029 09:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.288 09:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 
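verify_raid_bdev_state, whose locals and jq filter appear in the trace above, boils down to fetching all raid bdevs over RPC, selecting the one by name, and comparing JSON fields against the expected values. A standalone sketch of that check, using only field names visible in the raid_bdev_info dump above:

    info=$(./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "raid_bdev1")')
    # with only pt1 re-attached, the raid bdev must sit in "configuring"
    [[ $(jq -r .state <<< "$info") == configuring ]]
    [[ $(jq -r .raid_level <<< "$info") == concat ]]
    [[ $(jq -r .strip_size_kb <<< "$info") -eq 64 ]]
    [[ $(jq -r .num_base_bdevs_operational <<< "$info") -eq 3 ]]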
00:22:18.288 09:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:18.546 [2024-07-15 09:48:46.643932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:18.546 [2024-07-15 09:48:46.644006] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:18.546 [2024-07-15 09:48:46.644018] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x195dd3835680 00:22:18.547 [2024-07-15 09:48:46.644025] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:18.547 [2024-07-15 09:48:46.644161] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:18.547 [2024-07-15 09:48:46.644168] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:18.547 [2024-07-15 09:48:46.644192] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:18.547 [2024-07-15 09:48:46.644215] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:18.547 pt2 00:22:18.824 09:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:18.824 [2024-07-15 09:48:46.855919] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:18.824 09:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:22:18.824 09:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:18.824 09:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:18.824 09:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:18.824 09:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:18.824 09:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:18.824 09:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:18.824 09:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:18.824 09:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:18.824 09:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:18.824 09:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.824 09:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:19.130 09:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:19.130 "name": "raid_bdev1", 00:22:19.130 "uuid": "6b067130-428f-11ef-a0af-c98d8ee52a94", 00:22:19.130 "strip_size_kb": 64, 00:22:19.130 "state": "configuring", 00:22:19.130 "raid_level": "concat", 00:22:19.130 "superblock": true, 00:22:19.130 "num_base_bdevs": 3, 00:22:19.130 "num_base_bdevs_discovered": 1, 00:22:19.130 "num_base_bdevs_operational": 3, 00:22:19.130 "base_bdevs_list": [ 00:22:19.130 { 00:22:19.130 "name": "pt1", 00:22:19.130 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:19.130 "is_configured": 
true, 00:22:19.130 "data_offset": 2048, 00:22:19.130 "data_size": 63488 00:22:19.130 }, 00:22:19.130 { 00:22:19.130 "name": null, 00:22:19.130 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:19.130 "is_configured": false, 00:22:19.130 "data_offset": 2048, 00:22:19.130 "data_size": 63488 00:22:19.130 }, 00:22:19.130 { 00:22:19.130 "name": null, 00:22:19.130 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:19.130 "is_configured": false, 00:22:19.130 "data_offset": 2048, 00:22:19.130 "data_size": 63488 00:22:19.130 } 00:22:19.130 ] 00:22:19.130 }' 00:22:19.130 09:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:19.130 09:48:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.389 09:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:22:19.389 09:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:19.389 09:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:19.648 [2024-07-15 09:48:47.607982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:19.648 [2024-07-15 09:48:47.608036] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:19.648 [2024-07-15 09:48:47.608045] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x195dd3835680 00:22:19.648 [2024-07-15 09:48:47.608052] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:19.648 [2024-07-15 09:48:47.608148] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:19.648 [2024-07-15 09:48:47.608156] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:19.648 [2024-07-15 09:48:47.608173] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:19.648 [2024-07-15 09:48:47.608180] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:19.648 pt2 00:22:19.648 09:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:22:19.648 09:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:19.648 09:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:19.907 [2024-07-15 09:48:47.820012] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:19.907 [2024-07-15 09:48:47.820056] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:19.907 [2024-07-15 09:48:47.820063] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x195dd3835400 00:22:19.907 [2024-07-15 09:48:47.820069] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:19.907 [2024-07-15 09:48:47.820151] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:19.907 [2024-07-15 09:48:47.820159] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:19.907 [2024-07-15 09:48:47.820171] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:19.907 [2024-07-15 09:48:47.820177] bdev_raid.c:3198:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt3 is claimed 00:22:19.907 [2024-07-15 09:48:47.820197] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x195dd3834780 00:22:19.907 [2024-07-15 09:48:47.820201] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:19.907 [2024-07-15 09:48:47.820218] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x195dd3897e20 00:22:19.907 [2024-07-15 09:48:47.820261] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x195dd3834780 00:22:19.907 [2024-07-15 09:48:47.820265] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x195dd3834780 00:22:19.907 [2024-07-15 09:48:47.820281] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:19.907 pt3 00:22:19.907 09:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:22:19.907 09:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:19.907 09:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:22:19.907 09:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:19.907 09:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:19.907 09:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:19.907 09:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:19.907 09:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:19.907 09:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:19.907 09:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:19.907 09:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:19.907 09:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:19.907 09:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:19.907 09:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.166 09:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:20.166 "name": "raid_bdev1", 00:22:20.166 "uuid": "6b067130-428f-11ef-a0af-c98d8ee52a94", 00:22:20.166 "strip_size_kb": 64, 00:22:20.166 "state": "online", 00:22:20.166 "raid_level": "concat", 00:22:20.166 "superblock": true, 00:22:20.166 "num_base_bdevs": 3, 00:22:20.166 "num_base_bdevs_discovered": 3, 00:22:20.166 "num_base_bdevs_operational": 3, 00:22:20.166 "base_bdevs_list": [ 00:22:20.166 { 00:22:20.166 "name": "pt1", 00:22:20.166 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:20.166 "is_configured": true, 00:22:20.166 "data_offset": 2048, 00:22:20.166 "data_size": 63488 00:22:20.166 }, 00:22:20.166 { 00:22:20.166 "name": "pt2", 00:22:20.166 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:20.166 "is_configured": true, 00:22:20.166 "data_offset": 2048, 00:22:20.166 "data_size": 63488 00:22:20.166 }, 00:22:20.166 { 00:22:20.166 "name": "pt3", 00:22:20.166 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:20.166 "is_configured": true, 00:22:20.166 "data_offset": 2048, 
00:22:20.166 "data_size": 63488 00:22:20.166 } 00:22:20.166 ] 00:22:20.166 }' 00:22:20.166 09:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:20.166 09:48:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.425 09:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:22:20.425 09:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:22:20.425 09:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:20.425 09:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:20.425 09:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:20.425 09:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:20.425 09:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:20.425 09:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:20.425 [2024-07-15 09:48:48.524080] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:20.685 09:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:20.685 "name": "raid_bdev1", 00:22:20.685 "aliases": [ 00:22:20.685 "6b067130-428f-11ef-a0af-c98d8ee52a94" 00:22:20.685 ], 00:22:20.685 "product_name": "Raid Volume", 00:22:20.685 "block_size": 512, 00:22:20.685 "num_blocks": 190464, 00:22:20.685 "uuid": "6b067130-428f-11ef-a0af-c98d8ee52a94", 00:22:20.685 "assigned_rate_limits": { 00:22:20.685 "rw_ios_per_sec": 0, 00:22:20.685 "rw_mbytes_per_sec": 0, 00:22:20.685 "r_mbytes_per_sec": 0, 00:22:20.685 "w_mbytes_per_sec": 0 00:22:20.685 }, 00:22:20.685 "claimed": false, 00:22:20.685 "zoned": false, 00:22:20.685 "supported_io_types": { 00:22:20.685 "read": true, 00:22:20.685 "write": true, 00:22:20.685 "unmap": true, 00:22:20.685 "flush": true, 00:22:20.685 "reset": true, 00:22:20.685 "nvme_admin": false, 00:22:20.685 "nvme_io": false, 00:22:20.685 "nvme_io_md": false, 00:22:20.685 "write_zeroes": true, 00:22:20.685 "zcopy": false, 00:22:20.685 "get_zone_info": false, 00:22:20.685 "zone_management": false, 00:22:20.685 "zone_append": false, 00:22:20.685 "compare": false, 00:22:20.685 "compare_and_write": false, 00:22:20.685 "abort": false, 00:22:20.685 "seek_hole": false, 00:22:20.685 "seek_data": false, 00:22:20.685 "copy": false, 00:22:20.685 "nvme_iov_md": false 00:22:20.685 }, 00:22:20.685 "memory_domains": [ 00:22:20.685 { 00:22:20.685 "dma_device_id": "system", 00:22:20.685 "dma_device_type": 1 00:22:20.685 }, 00:22:20.685 { 00:22:20.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:20.685 "dma_device_type": 2 00:22:20.685 }, 00:22:20.685 { 00:22:20.685 "dma_device_id": "system", 00:22:20.685 "dma_device_type": 1 00:22:20.685 }, 00:22:20.685 { 00:22:20.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:20.685 "dma_device_type": 2 00:22:20.685 }, 00:22:20.685 { 00:22:20.685 "dma_device_id": "system", 00:22:20.685 "dma_device_type": 1 00:22:20.685 }, 00:22:20.685 { 00:22:20.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:20.685 "dma_device_type": 2 00:22:20.685 } 00:22:20.685 ], 00:22:20.685 "driver_specific": { 00:22:20.685 "raid": { 00:22:20.685 "uuid": "6b067130-428f-11ef-a0af-c98d8ee52a94", 00:22:20.685 "strip_size_kb": 64, 00:22:20.685 
"state": "online", 00:22:20.685 "raid_level": "concat", 00:22:20.685 "superblock": true, 00:22:20.685 "num_base_bdevs": 3, 00:22:20.685 "num_base_bdevs_discovered": 3, 00:22:20.685 "num_base_bdevs_operational": 3, 00:22:20.685 "base_bdevs_list": [ 00:22:20.685 { 00:22:20.685 "name": "pt1", 00:22:20.685 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:20.685 "is_configured": true, 00:22:20.685 "data_offset": 2048, 00:22:20.685 "data_size": 63488 00:22:20.685 }, 00:22:20.685 { 00:22:20.685 "name": "pt2", 00:22:20.685 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:20.685 "is_configured": true, 00:22:20.685 "data_offset": 2048, 00:22:20.685 "data_size": 63488 00:22:20.685 }, 00:22:20.685 { 00:22:20.685 "name": "pt3", 00:22:20.685 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:20.685 "is_configured": true, 00:22:20.685 "data_offset": 2048, 00:22:20.685 "data_size": 63488 00:22:20.685 } 00:22:20.685 ] 00:22:20.685 } 00:22:20.685 } 00:22:20.685 }' 00:22:20.685 09:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:20.685 09:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:22:20.685 pt2 00:22:20.685 pt3' 00:22:20.685 09:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:20.685 09:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:20.685 09:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:20.685 09:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:20.685 "name": "pt1", 00:22:20.685 "aliases": [ 00:22:20.685 "00000000-0000-0000-0000-000000000001" 00:22:20.685 ], 00:22:20.685 "product_name": "passthru", 00:22:20.685 "block_size": 512, 00:22:20.685 "num_blocks": 65536, 00:22:20.685 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:20.685 "assigned_rate_limits": { 00:22:20.685 "rw_ios_per_sec": 0, 00:22:20.685 "rw_mbytes_per_sec": 0, 00:22:20.685 "r_mbytes_per_sec": 0, 00:22:20.685 "w_mbytes_per_sec": 0 00:22:20.685 }, 00:22:20.685 "claimed": true, 00:22:20.685 "claim_type": "exclusive_write", 00:22:20.685 "zoned": false, 00:22:20.685 "supported_io_types": { 00:22:20.685 "read": true, 00:22:20.685 "write": true, 00:22:20.685 "unmap": true, 00:22:20.685 "flush": true, 00:22:20.685 "reset": true, 00:22:20.685 "nvme_admin": false, 00:22:20.685 "nvme_io": false, 00:22:20.685 "nvme_io_md": false, 00:22:20.685 "write_zeroes": true, 00:22:20.685 "zcopy": true, 00:22:20.685 "get_zone_info": false, 00:22:20.685 "zone_management": false, 00:22:20.685 "zone_append": false, 00:22:20.685 "compare": false, 00:22:20.685 "compare_and_write": false, 00:22:20.685 "abort": true, 00:22:20.685 "seek_hole": false, 00:22:20.685 "seek_data": false, 00:22:20.685 "copy": true, 00:22:20.685 "nvme_iov_md": false 00:22:20.685 }, 00:22:20.685 "memory_domains": [ 00:22:20.685 { 00:22:20.685 "dma_device_id": "system", 00:22:20.685 "dma_device_type": 1 00:22:20.685 }, 00:22:20.685 { 00:22:20.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:20.685 "dma_device_type": 2 00:22:20.685 } 00:22:20.685 ], 00:22:20.685 "driver_specific": { 00:22:20.685 "passthru": { 00:22:20.685 "name": "pt1", 00:22:20.685 "base_bdev_name": "malloc1" 00:22:20.685 } 00:22:20.685 } 00:22:20.685 }' 00:22:20.685 09:48:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:20.685 09:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:20.685 09:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:20.685 09:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:20.685 09:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:20.685 09:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:20.685 09:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:20.944 09:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:20.944 09:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:20.944 09:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:20.944 09:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:20.944 09:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:20.944 09:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:20.944 09:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:20.944 09:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:20.944 09:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:20.944 "name": "pt2", 00:22:20.944 "aliases": [ 00:22:20.944 "00000000-0000-0000-0000-000000000002" 00:22:20.944 ], 00:22:20.944 "product_name": "passthru", 00:22:20.944 "block_size": 512, 00:22:20.944 "num_blocks": 65536, 00:22:20.944 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:20.944 "assigned_rate_limits": { 00:22:20.944 "rw_ios_per_sec": 0, 00:22:20.944 "rw_mbytes_per_sec": 0, 00:22:20.944 "r_mbytes_per_sec": 0, 00:22:20.944 "w_mbytes_per_sec": 0 00:22:20.944 }, 00:22:20.944 "claimed": true, 00:22:20.944 "claim_type": "exclusive_write", 00:22:20.944 "zoned": false, 00:22:20.944 "supported_io_types": { 00:22:20.944 "read": true, 00:22:20.944 "write": true, 00:22:20.944 "unmap": true, 00:22:20.944 "flush": true, 00:22:20.944 "reset": true, 00:22:20.944 "nvme_admin": false, 00:22:20.944 "nvme_io": false, 00:22:20.944 "nvme_io_md": false, 00:22:20.944 "write_zeroes": true, 00:22:20.944 "zcopy": true, 00:22:20.944 "get_zone_info": false, 00:22:20.944 "zone_management": false, 00:22:20.944 "zone_append": false, 00:22:20.944 "compare": false, 00:22:20.944 "compare_and_write": false, 00:22:20.944 "abort": true, 00:22:20.944 "seek_hole": false, 00:22:20.944 "seek_data": false, 00:22:20.944 "copy": true, 00:22:20.944 "nvme_iov_md": false 00:22:20.944 }, 00:22:20.944 "memory_domains": [ 00:22:20.944 { 00:22:20.944 "dma_device_id": "system", 00:22:20.944 "dma_device_type": 1 00:22:20.944 }, 00:22:20.944 { 00:22:20.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:20.944 "dma_device_type": 2 00:22:20.944 } 00:22:20.944 ], 00:22:20.944 "driver_specific": { 00:22:20.944 "passthru": { 00:22:20.944 "name": "pt2", 00:22:20.944 "base_bdev_name": "malloc2" 00:22:20.944 } 00:22:20.944 } 00:22:20.944 }' 00:22:20.944 09:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:20.944 09:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:20.944 
09:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:20.944 09:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:21.202 09:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:21.202 09:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:21.202 09:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:21.202 09:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:21.202 09:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:21.202 09:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:21.202 09:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:21.202 09:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:21.202 09:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:21.202 09:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:22:21.202 09:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:21.462 09:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:21.462 "name": "pt3", 00:22:21.462 "aliases": [ 00:22:21.462 "00000000-0000-0000-0000-000000000003" 00:22:21.462 ], 00:22:21.462 "product_name": "passthru", 00:22:21.462 "block_size": 512, 00:22:21.462 "num_blocks": 65536, 00:22:21.462 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:21.462 "assigned_rate_limits": { 00:22:21.462 "rw_ios_per_sec": 0, 00:22:21.462 "rw_mbytes_per_sec": 0, 00:22:21.462 "r_mbytes_per_sec": 0, 00:22:21.462 "w_mbytes_per_sec": 0 00:22:21.462 }, 00:22:21.462 "claimed": true, 00:22:21.462 "claim_type": "exclusive_write", 00:22:21.462 "zoned": false, 00:22:21.462 "supported_io_types": { 00:22:21.462 "read": true, 00:22:21.462 "write": true, 00:22:21.462 "unmap": true, 00:22:21.462 "flush": true, 00:22:21.462 "reset": true, 00:22:21.462 "nvme_admin": false, 00:22:21.462 "nvme_io": false, 00:22:21.462 "nvme_io_md": false, 00:22:21.462 "write_zeroes": true, 00:22:21.462 "zcopy": true, 00:22:21.462 "get_zone_info": false, 00:22:21.462 "zone_management": false, 00:22:21.462 "zone_append": false, 00:22:21.462 "compare": false, 00:22:21.462 "compare_and_write": false, 00:22:21.462 "abort": true, 00:22:21.462 "seek_hole": false, 00:22:21.462 "seek_data": false, 00:22:21.462 "copy": true, 00:22:21.462 "nvme_iov_md": false 00:22:21.462 }, 00:22:21.462 "memory_domains": [ 00:22:21.462 { 00:22:21.462 "dma_device_id": "system", 00:22:21.462 "dma_device_type": 1 00:22:21.462 }, 00:22:21.462 { 00:22:21.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:21.462 "dma_device_type": 2 00:22:21.462 } 00:22:21.462 ], 00:22:21.462 "driver_specific": { 00:22:21.462 "passthru": { 00:22:21.462 "name": "pt3", 00:22:21.462 "base_bdev_name": "malloc3" 00:22:21.462 } 00:22:21.462 } 00:22:21.462 }' 00:22:21.462 09:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:21.462 09:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:21.462 09:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:21.462 09:48:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:21.462 09:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:21.462 09:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:21.462 09:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:21.462 09:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:21.462 09:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:21.462 09:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:21.462 09:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:21.462 09:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:21.462 09:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:21.462 09:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:22:21.722 [2024-07-15 09:48:49.572120] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:21.722 09:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 6b067130-428f-11ef-a0af-c98d8ee52a94 '!=' 6b067130-428f-11ef-a0af-c98d8ee52a94 ']' 00:22:21.722 09:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:22:21.722 09:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:21.722 09:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:22:21.722 09:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 55389 00:22:21.722 09:48:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 55389 ']' 00:22:21.722 09:48:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 55389 00:22:21.722 09:48:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:22:21.722 09:48:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:22:21.722 09:48:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 55389 00:22:21.722 09:48:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:22:21.722 09:48:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:22:21.722 09:48:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:22:21.722 killing process with pid 55389 00:22:21.722 09:48:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 55389' 00:22:21.722 09:48:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 55389 00:22:21.722 [2024-07-15 09:48:49.605241] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:21.722 [2024-07-15 09:48:49.605258] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:21.722 [2024-07-15 09:48:49.605280] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:21.722 [2024-07-15 09:48:49.605283] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x195dd3834780 name raid_bdev1, state offline 00:22:21.722 09:48:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@972 -- # wait 55389 00:22:21.722 [2024-07-15 09:48:49.630870] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:21.980 09:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:22:21.980 00:22:21.980 real 0m10.803s 00:22:21.980 user 0m18.558s 00:22:21.980 sys 0m2.233s 00:22:21.980 09:48:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:21.980 09:48:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.980 ************************************ 00:22:21.980 END TEST raid_superblock_test 00:22:21.980 ************************************ 00:22:21.980 09:48:49 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:21.980 09:48:49 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:22:21.980 09:48:49 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:22:21.980 09:48:49 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:21.980 09:48:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:21.980 ************************************ 00:22:21.980 START TEST raid_read_error_test 00:22:21.980 ************************************ 00:22:21.980 09:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 3 read 00:22:21.980 09:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:22:21.981 09:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:22:21.981 09:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:22:21.981 09:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:22:21.981 09:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:21.981 09:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:22:21.981 09:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:21.981 09:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:21.981 09:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:22:21.981 09:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:21.981 09:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:21.981 09:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:22:21.981 09:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:21.981 09:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:21.981 09:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:21.981 09:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:22:21.981 09:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:22:21.981 09:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:22:21.981 09:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:22:21.981 09:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:22:21.981 09:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 
-- # local fail_per_s 00:22:21.981 09:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:22:21.981 09:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:22:21.981 09:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:22:21.981 09:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:22:21.981 09:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.Q5Tz8PLpjD 00:22:21.981 09:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=55736 00:22:21.981 09:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 55736 /var/tmp/spdk-raid.sock 00:22:21.981 09:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 55736 ']' 00:22:21.981 09:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:21.981 09:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:21.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:21.981 09:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:21.981 09:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:21.981 09:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.981 09:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:21.981 [2024-07-15 09:48:49.961061] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:22:21.981 [2024-07-15 09:48:49.961324] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:22:22.578 EAL: TSC is not safe to use in SMP mode 00:22:22.578 EAL: TSC is not invariant 00:22:22.578 [2024-07-15 09:48:50.680484] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.836 [2024-07-15 09:48:50.796736] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
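The records above launch bdevperf as the RPC target for the read-error test: -t 60 -w randrw -M 50 -o 128k -q 1 describe the 60-second 50/50 random read/write workload at 128k I/O size and queue depth 1, and -z makes the app wait for an RPC before starting the workload (the bdevperf.py perform_tests call later in the trace). A condensed sketch of the sequence; the log-capture redirection is an assumption, since the trace only shows the mktemp result:

    bdevperf_log=$(mktemp -p /raidtest)
    ./build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 \
        -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 2> "$bdevperf_log" &
    raid_pid=$!
    # poll until the app answers on the RPC socket
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock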
00:22:22.836 [2024-07-15 09:48:50.799282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:22.836 [2024-07-15 09:48:50.800029] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:22.836 [2024-07-15 09:48:50.800041] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:23.094 09:48:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:23.094 09:48:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:22:23.094 09:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:23.094 09:48:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:23.094 BaseBdev1_malloc 00:22:23.094 09:48:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:22:23.352 true 00:22:23.352 09:48:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:23.611 [2024-07-15 09:48:51.570926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:23.611 [2024-07-15 09:48:51.571002] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:23.611 [2024-07-15 09:48:51.571035] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13aa37634780 00:22:23.611 [2024-07-15 09:48:51.571042] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:23.611 [2024-07-15 09:48:51.571818] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:23.611 [2024-07-15 09:48:51.571851] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:23.611 BaseBdev1 00:22:23.611 09:48:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:23.611 09:48:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:23.870 BaseBdev2_malloc 00:22:23.870 09:48:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:22:24.130 true 00:22:24.130 09:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:24.130 [2024-07-15 09:48:52.190962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:22:24.130 [2024-07-15 09:48:52.191031] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:24.130 [2024-07-15 09:48:52.191065] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13aa37634c80 00:22:24.130 [2024-07-15 09:48:52.191072] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:24.130 [2024-07-15 09:48:52.191871] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:24.130 [2024-07-15 09:48:52.191902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: BaseBdev2 00:22:24.130 BaseBdev2 00:22:24.130 09:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:24.130 09:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:24.389 BaseBdev3_malloc 00:22:24.389 09:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:22:24.647 true 00:22:24.647 09:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:22:24.905 [2024-07-15 09:48:52.827012] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:22:24.905 [2024-07-15 09:48:52.827104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:24.905 [2024-07-15 09:48:52.827137] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13aa37635180 00:22:24.905 [2024-07-15 09:48:52.827145] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:24.905 [2024-07-15 09:48:52.827941] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:24.905 [2024-07-15 09:48:52.827974] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:24.905 BaseBdev3 00:22:24.905 09:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:22:25.163 [2024-07-15 09:48:53.051039] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:25.163 [2024-07-15 09:48:53.051775] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:25.163 [2024-07-15 09:48:53.051806] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:25.163 [2024-07-15 09:48:53.051863] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x13aa37635400 00:22:25.163 [2024-07-15 09:48:53.051869] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:25.163 [2024-07-15 09:48:53.051911] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x13aa376a0e20 00:22:25.163 [2024-07-15 09:48:53.051984] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x13aa37635400 00:22:25.163 [2024-07-15 09:48:53.051987] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x13aa37635400 00:22:25.163 [2024-07-15 09:48:53.052013] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:25.163 09:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:22:25.163 09:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:25.163 09:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:25.163 09:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:25.163 09:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:25.163 
09:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:25.163 09:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:25.163 09:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:25.163 09:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:25.163 09:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:25.163 09:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.163 09:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.421 09:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:25.421 "name": "raid_bdev1", 00:22:25.421 "uuid": "71e739c5-428f-11ef-a0af-c98d8ee52a94", 00:22:25.421 "strip_size_kb": 64, 00:22:25.421 "state": "online", 00:22:25.421 "raid_level": "concat", 00:22:25.421 "superblock": true, 00:22:25.421 "num_base_bdevs": 3, 00:22:25.421 "num_base_bdevs_discovered": 3, 00:22:25.421 "num_base_bdevs_operational": 3, 00:22:25.421 "base_bdevs_list": [ 00:22:25.421 { 00:22:25.421 "name": "BaseBdev1", 00:22:25.421 "uuid": "b4c2566c-90c2-3050-a2f1-bbdd2c9df82a", 00:22:25.421 "is_configured": true, 00:22:25.421 "data_offset": 2048, 00:22:25.421 "data_size": 63488 00:22:25.421 }, 00:22:25.421 { 00:22:25.421 "name": "BaseBdev2", 00:22:25.421 "uuid": "65d4b33b-5aa7-cf52-9360-cb62e1a7db3e", 00:22:25.421 "is_configured": true, 00:22:25.421 "data_offset": 2048, 00:22:25.421 "data_size": 63488 00:22:25.421 }, 00:22:25.421 { 00:22:25.421 "name": "BaseBdev3", 00:22:25.421 "uuid": "c51110fc-46a4-6d54-a1c9-fb25be211616", 00:22:25.421 "is_configured": true, 00:22:25.421 "data_offset": 2048, 00:22:25.421 "data_size": 63488 00:22:25.421 } 00:22:25.421 ] 00:22:25.421 }' 00:22:25.421 09:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:25.421 09:48:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.680 09:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:22:25.680 09:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:25.680 [2024-07-15 09:48:53.727195] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x13aa376a0ec0 00:22:26.617 09:48:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:22:26.876 09:48:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:22:26.876 09:48:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:22:26.876 09:48:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:22:26.876 09:48:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:22:26.876 09:48:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:26.876 09:48:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:26.876 
09:48:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:26.876 09:48:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:26.876 09:48:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:26.876 09:48:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:26.876 09:48:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:26.876 09:48:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:26.876 09:48:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:26.876 09:48:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.876 09:48:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:27.135 09:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:27.135 "name": "raid_bdev1", 00:22:27.135 "uuid": "71e739c5-428f-11ef-a0af-c98d8ee52a94", 00:22:27.135 "strip_size_kb": 64, 00:22:27.135 "state": "online", 00:22:27.135 "raid_level": "concat", 00:22:27.135 "superblock": true, 00:22:27.135 "num_base_bdevs": 3, 00:22:27.135 "num_base_bdevs_discovered": 3, 00:22:27.135 "num_base_bdevs_operational": 3, 00:22:27.135 "base_bdevs_list": [ 00:22:27.135 { 00:22:27.135 "name": "BaseBdev1", 00:22:27.135 "uuid": "b4c2566c-90c2-3050-a2f1-bbdd2c9df82a", 00:22:27.135 "is_configured": true, 00:22:27.135 "data_offset": 2048, 00:22:27.135 "data_size": 63488 00:22:27.135 }, 00:22:27.135 { 00:22:27.135 "name": "BaseBdev2", 00:22:27.135 "uuid": "65d4b33b-5aa7-cf52-9360-cb62e1a7db3e", 00:22:27.135 "is_configured": true, 00:22:27.135 "data_offset": 2048, 00:22:27.135 "data_size": 63488 00:22:27.135 }, 00:22:27.135 { 00:22:27.135 "name": "BaseBdev3", 00:22:27.135 "uuid": "c51110fc-46a4-6d54-a1c9-fb25be211616", 00:22:27.135 "is_configured": true, 00:22:27.135 "data_offset": 2048, 00:22:27.135 "data_size": 63488 00:22:27.135 } 00:22:27.135 ] 00:22:27.135 }' 00:22:27.135 09:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:27.135 09:48:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.395 09:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:27.655 [2024-07-15 09:48:55.633490] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:27.655 [2024-07-15 09:48:55.633518] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:27.655 [2024-07-15 09:48:55.633842] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:27.655 [2024-07-15 09:48:55.633852] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:27.655 [2024-07-15 09:48:55.633861] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:27.655 [2024-07-15 09:48:55.633865] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x13aa37635400 name raid_bdev1, state offline 00:22:27.655 0 00:22:27.655 09:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 55736 00:22:27.655 09:48:55 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 55736 ']' 00:22:27.655 09:48:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 55736 00:22:27.655 09:48:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:22:27.655 09:48:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:22:27.655 09:48:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 55736 00:22:27.655 09:48:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:22:27.655 09:48:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:22:27.655 killing process with pid 55736 00:22:27.655 09:48:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:22:27.655 09:48:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 55736' 00:22:27.655 09:48:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 55736 00:22:27.655 [2024-07-15 09:48:55.664945] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:27.655 09:48:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 55736 00:22:27.655 [2024-07-15 09:48:55.690173] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:27.933 09:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:22:27.933 09:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.Q5Tz8PLpjD 00:22:27.933 09:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:22:27.933 09:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:22:27.933 09:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:22:27.933 09:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:27.933 09:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:22:27.933 09:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:22:27.933 00:22:27.933 real 0m6.011s 00:22:27.933 user 0m8.865s 00:22:27.933 sys 0m1.334s 00:22:27.933 09:48:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:27.933 09:48:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.933 ************************************ 00:22:27.933 END TEST raid_read_error_test 00:22:27.933 ************************************ 00:22:27.933 09:48:55 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:27.933 09:48:55 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:22:27.933 09:48:55 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:22:27.933 09:48:55 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:27.933 09:48:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:27.933 ************************************ 00:22:27.933 START TEST raid_write_error_test 00:22:27.933 ************************************ 00:22:27.933 09:48:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 3 write 00:22:27.933 09:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:22:27.933 09:48:56 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:22:27.933 09:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:22:27.933 09:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:22:27.933 09:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:27.933 09:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:22:27.933 09:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:27.933 09:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:27.933 09:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:22:27.933 09:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:27.933 09:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:27.933 09:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:22:27.933 09:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:27.933 09:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:27.933 09:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:27.933 09:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:22:27.933 09:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:22:27.933 09:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:22:27.933 09:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:22:27.933 09:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:22:27.933 09:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:22:27.933 09:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:22:27.933 09:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:22:27.933 09:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:22:27.933 09:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:22:27.933 09:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.WZkeuaSm7z 00:22:27.933 09:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=55867 00:22:27.933 09:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 55867 /var/tmp/spdk-raid.sock 00:22:27.933 09:48:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 55867 ']' 00:22:28.226 09:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:28.226 09:48:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:28.226 09:48:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:28.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
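Once this second bdevperf instance is listening, the write pass repeats the read-pass recipe: build the same three-level stack, inject a failure on the first base bdev, drive I/O, then scrape the failure rate out of the bdevperf log. A condensed sketch of that final check, assembled from the commands visible later in this trace (treating field 6 of the raid_bdev1 row as the failures-per-second column is an inference from the awk '{print $6}' seen below):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Make writes to the first base bdev fail, then run the queued bdevperf job.
    $RPC bdev_error_inject_error EE_BaseBdev1_malloc write failure
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests

    # concat carries no redundancy, so injected failures must surface at the
    # raid bdev: the scraped rate has to be non-zero for the test to pass.
    fail_per_s=$(grep -v Job /raidtest/tmp.WZkeuaSm7z | grep raid_bdev1 | awk '{print $6}')
    [[ $fail_per_s != "0.00" ]]

In this run the scrape yields 0.54 failures/s, so the [[ 0.54 != 0.00 ]] comparison holds and the write-error test passes.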
00:22:28.226 09:48:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:28.226 09:48:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:28.226 09:48:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.226 [2024-07-15 09:48:56.031694] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:22:28.226 [2024-07-15 09:48:56.031950] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:22:28.793 EAL: TSC is not safe to use in SMP mode 00:22:28.793 EAL: TSC is not invariant 00:22:28.793 [2024-07-15 09:48:56.739173] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.793 [2024-07-15 09:48:56.853322] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:22:28.793 [2024-07-15 09:48:56.855856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.793 [2024-07-15 09:48:56.859482] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:28.793 [2024-07-15 09:48:56.859491] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:29.051 09:48:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:29.051 09:48:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:22:29.051 09:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:29.051 09:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:29.310 BaseBdev1_malloc 00:22:29.310 09:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:22:29.310 true 00:22:29.310 09:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:29.570 [2024-07-15 09:48:57.590536] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:29.570 [2024-07-15 09:48:57.590613] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:29.570 [2024-07-15 09:48:57.590644] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2f64adc34780 00:22:29.570 [2024-07-15 09:48:57.590651] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:29.570 [2024-07-15 09:48:57.591392] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:29.570 [2024-07-15 09:48:57.591420] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:29.570 BaseBdev1 00:22:29.570 09:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:29.570 09:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:29.830 BaseBdev2_malloc 00:22:29.830 09:48:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:22:30.089 true 00:22:30.089 09:48:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:30.089 [2024-07-15 09:48:58.178562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:22:30.089 [2024-07-15 09:48:58.178621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:30.089 [2024-07-15 09:48:58.178652] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2f64adc34c80 00:22:30.089 [2024-07-15 09:48:58.178659] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:30.089 [2024-07-15 09:48:58.179318] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:30.089 [2024-07-15 09:48:58.179347] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:30.089 BaseBdev2 00:22:30.348 09:48:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:30.348 09:48:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:30.348 BaseBdev3_malloc 00:22:30.348 09:48:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:22:30.606 true 00:22:30.606 09:48:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:22:30.866 [2024-07-15 09:48:58.766591] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:22:30.866 [2024-07-15 09:48:58.766651] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:30.866 [2024-07-15 09:48:58.766681] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2f64adc35180 00:22:30.866 [2024-07-15 09:48:58.766687] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:30.866 [2024-07-15 09:48:58.767346] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:30.866 [2024-07-15 09:48:58.767376] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:30.866 BaseBdev3 00:22:30.866 09:48:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:22:31.124 [2024-07-15 09:48:58.978624] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:31.124 [2024-07-15 09:48:58.979239] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:31.124 [2024-07-15 09:48:58.979259] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:31.124 [2024-07-15 09:48:58.979312] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2f64adc35400 00:22:31.124 [2024-07-15 09:48:58.979317] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:31.124 [2024-07-15 09:48:58.979356] bdev_raid.c: 
251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2f64adca0e20 00:22:31.124 [2024-07-15 09:48:58.979421] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2f64adc35400 00:22:31.124 [2024-07-15 09:48:58.979424] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2f64adc35400 00:22:31.124 [2024-07-15 09:48:58.979443] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:31.124 09:48:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:22:31.124 09:48:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:31.124 09:48:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:31.124 09:48:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:31.124 09:48:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:31.124 09:48:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:31.124 09:48:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:31.124 09:48:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:31.124 09:48:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:31.124 09:48:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:31.124 09:48:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.124 09:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.124 09:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:31.124 "name": "raid_bdev1", 00:22:31.124 "uuid": "756fb427-428f-11ef-a0af-c98d8ee52a94", 00:22:31.124 "strip_size_kb": 64, 00:22:31.124 "state": "online", 00:22:31.124 "raid_level": "concat", 00:22:31.124 "superblock": true, 00:22:31.124 "num_base_bdevs": 3, 00:22:31.124 "num_base_bdevs_discovered": 3, 00:22:31.124 "num_base_bdevs_operational": 3, 00:22:31.124 "base_bdevs_list": [ 00:22:31.124 { 00:22:31.124 "name": "BaseBdev1", 00:22:31.124 "uuid": "b16c4f90-03ba-ec5d-a67a-bb5f2c738496", 00:22:31.124 "is_configured": true, 00:22:31.124 "data_offset": 2048, 00:22:31.124 "data_size": 63488 00:22:31.124 }, 00:22:31.124 { 00:22:31.124 "name": "BaseBdev2", 00:22:31.124 "uuid": "bf3d9887-ce51-1958-98ca-aeadbd685b29", 00:22:31.124 "is_configured": true, 00:22:31.124 "data_offset": 2048, 00:22:31.124 "data_size": 63488 00:22:31.124 }, 00:22:31.124 { 00:22:31.124 "name": "BaseBdev3", 00:22:31.124 "uuid": "8c54f68a-0359-1e57-9ec5-c5ce10291b18", 00:22:31.124 "is_configured": true, 00:22:31.124 "data_offset": 2048, 00:22:31.124 "data_size": 63488 00:22:31.124 } 00:22:31.124 ] 00:22:31.124 }' 00:22:31.124 09:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:31.124 09:48:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.381 09:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:31.381 09:48:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@824 -- # sleep 1 00:22:31.639 [2024-07-15 09:48:59.586760] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2f64adca0ec0 00:22:32.576 09:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:22:32.835 09:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:22:32.835 09:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:22:32.835 09:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:22:32.835 09:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:22:32.835 09:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:32.835 09:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:32.835 09:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:32.835 09:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:32.835 09:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:32.835 09:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:32.835 09:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:32.835 09:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:32.835 09:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:32.835 09:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.835 09:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:33.094 09:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:33.095 "name": "raid_bdev1", 00:22:33.095 "uuid": "756fb427-428f-11ef-a0af-c98d8ee52a94", 00:22:33.095 "strip_size_kb": 64, 00:22:33.095 "state": "online", 00:22:33.095 "raid_level": "concat", 00:22:33.095 "superblock": true, 00:22:33.095 "num_base_bdevs": 3, 00:22:33.095 "num_base_bdevs_discovered": 3, 00:22:33.095 "num_base_bdevs_operational": 3, 00:22:33.095 "base_bdevs_list": [ 00:22:33.095 { 00:22:33.095 "name": "BaseBdev1", 00:22:33.095 "uuid": "b16c4f90-03ba-ec5d-a67a-bb5f2c738496", 00:22:33.095 "is_configured": true, 00:22:33.095 "data_offset": 2048, 00:22:33.095 "data_size": 63488 00:22:33.095 }, 00:22:33.095 { 00:22:33.095 "name": "BaseBdev2", 00:22:33.095 "uuid": "bf3d9887-ce51-1958-98ca-aeadbd685b29", 00:22:33.095 "is_configured": true, 00:22:33.095 "data_offset": 2048, 00:22:33.095 "data_size": 63488 00:22:33.095 }, 00:22:33.095 { 00:22:33.095 "name": "BaseBdev3", 00:22:33.095 "uuid": "8c54f68a-0359-1e57-9ec5-c5ce10291b18", 00:22:33.095 "is_configured": true, 00:22:33.095 "data_offset": 2048, 00:22:33.095 "data_size": 63488 00:22:33.095 } 00:22:33.095 ] 00:22:33.095 }' 00:22:33.095 09:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:33.095 09:49:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.354 
09:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:33.354 [2024-07-15 09:49:01.453623] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:33.354 [2024-07-15 09:49:01.453656] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:33.354 [2024-07-15 09:49:01.453973] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:33.354 [2024-07-15 09:49:01.453983] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:33.354 [2024-07-15 09:49:01.453992] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:33.354 [2024-07-15 09:49:01.453996] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2f64adc35400 name raid_bdev1, state offline 00:22:33.614 0 00:22:33.614 09:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 55867 00:22:33.614 09:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 55867 ']' 00:22:33.614 09:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 55867 00:22:33.614 09:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:22:33.614 09:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:22:33.614 09:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 55867 00:22:33.614 09:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:22:33.614 09:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:22:33.614 09:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:22:33.614 killing process with pid 55867 00:22:33.614 09:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 55867' 00:22:33.614 09:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 55867 00:22:33.614 [2024-07-15 09:49:01.484245] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:33.614 09:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 55867 00:22:33.614 [2024-07-15 09:49:01.509032] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:33.874 09:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.WZkeuaSm7z 00:22:33.874 09:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:22:33.874 09:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:22:33.874 09:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.54 00:22:33.874 09:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:22:33.874 09:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:33.874 09:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:22:33.874 09:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.54 != \0\.\0\0 ]] 00:22:33.874 00:22:33.874 real 0m5.763s 00:22:33.874 user 0m8.255s 00:22:33.874 sys 0m1.449s 00:22:33.874 09:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:22:33.874 09:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.874 ************************************ 00:22:33.874 END TEST raid_write_error_test 00:22:33.874 ************************************ 00:22:33.874 09:49:01 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:33.874 09:49:01 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:22:33.874 09:49:01 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:22:33.874 09:49:01 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:22:33.874 09:49:01 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:33.874 09:49:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:33.874 ************************************ 00:22:33.874 START TEST raid_state_function_test 00:22:33.874 ************************************ 00:22:33.874 09:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 3 false 00:22:33.874 09:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:22:33.874 09:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:22:33.874 09:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:22:33.874 09:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:22:33.874 09:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:22:33.874 09:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:33.874 09:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:22:33.874 09:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:33.874 09:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:33.874 09:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:22:33.874 09:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:33.874 09:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:33.874 09:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:22:33.874 09:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:33.874 09:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:33.874 09:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:33.874 09:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:22:33.874 09:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:22:33.874 09:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:22:33.874 09:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:22:33.874 09:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:22:33.874 09:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:22:33.875 09:49:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@234 -- # strip_size=0 00:22:33.875 09:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:22:33.875 09:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:22:33.875 09:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=55992 00:22:33.875 Process raid pid: 55992 00:22:33.875 09:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 55992' 00:22:33.875 09:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 55992 /var/tmp/spdk-raid.sock 00:22:33.875 09:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:33.875 09:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 55992 ']' 00:22:33.875 09:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:33.875 09:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:33.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:33.875 09:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:33.875 09:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:33.875 09:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.875 [2024-07-15 09:49:01.843301] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:22:33.875 [2024-07-15 09:49:01.843521] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:22:34.813 EAL: TSC is not safe to use in SMP mode 00:22:34.813 EAL: TSC is not invariant 00:22:34.813 [2024-07-15 09:49:02.560416] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.813 [2024-07-15 09:49:02.674285] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
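Every state assertion in this test funnels through the same bdev_raid_get_bdevs-plus-jq selection that the trace prints repeatedly. A small sketch of that pattern (check_raid_state is a hypothetical stand-in for the suite's verify_raid_bdev_state helper; the real helper's full field-by-field comparison is not shown in this trace):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Pull one raid bdev's info blob and compare the field under test.
    check_raid_state() {
        local name=$1 expected_state=$2
        local info
        info=$($RPC bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
        [[ $(jq -r .state <<< "$info") == "$expected_state" ]]
    }

    # A raid1 set declared over bdevs that do not exist yet sits in
    # "configuring" until all three base bdevs are created and claimed.
    check_raid_state Existed_Raid configuring

As the entries below show, num_base_bdevs_discovered then climbs from 0 toward 3 while each BaseBdevN malloc disk is registered and claimed.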
00:22:34.813 [2024-07-15 09:49:02.676665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:34.813 [2024-07-15 09:49:02.677353] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:34.813 [2024-07-15 09:49:02.677364] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:34.813 09:49:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:34.813 09:49:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:22:34.813 09:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:35.072 [2024-07-15 09:49:02.952147] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:35.072 [2024-07-15 09:49:02.952207] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:35.072 [2024-07-15 09:49:02.952212] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:35.072 [2024-07-15 09:49:02.952218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:35.072 [2024-07-15 09:49:02.952221] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:35.072 [2024-07-15 09:49:02.952227] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:35.072 09:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:35.072 09:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:35.072 09:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:35.072 09:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:35.072 09:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:35.072 09:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:35.072 09:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:35.072 09:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:35.072 09:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:35.072 09:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:35.072 09:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:35.073 09:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:35.333 09:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:35.333 "name": "Existed_Raid", 00:22:35.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:35.333 "strip_size_kb": 0, 00:22:35.333 "state": "configuring", 00:22:35.333 "raid_level": "raid1", 00:22:35.333 "superblock": false, 00:22:35.333 "num_base_bdevs": 3, 00:22:35.333 "num_base_bdevs_discovered": 0, 00:22:35.333 "num_base_bdevs_operational": 3, 00:22:35.333 "base_bdevs_list": [ 00:22:35.333 
{ 00:22:35.333 "name": "BaseBdev1", 00:22:35.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:35.333 "is_configured": false, 00:22:35.333 "data_offset": 0, 00:22:35.333 "data_size": 0 00:22:35.333 }, 00:22:35.333 { 00:22:35.333 "name": "BaseBdev2", 00:22:35.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:35.333 "is_configured": false, 00:22:35.333 "data_offset": 0, 00:22:35.333 "data_size": 0 00:22:35.333 }, 00:22:35.333 { 00:22:35.333 "name": "BaseBdev3", 00:22:35.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:35.333 "is_configured": false, 00:22:35.333 "data_offset": 0, 00:22:35.333 "data_size": 0 00:22:35.333 } 00:22:35.333 ] 00:22:35.333 }' 00:22:35.333 09:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:35.333 09:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.591 09:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:35.850 [2024-07-15 09:49:03.712164] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:35.850 [2024-07-15 09:49:03.712193] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2da0fa434500 name Existed_Raid, state configuring 00:22:35.850 09:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:35.850 [2024-07-15 09:49:03.940182] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:35.850 [2024-07-15 09:49:03.940233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:35.850 [2024-07-15 09:49:03.940236] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:35.850 [2024-07-15 09:49:03.940243] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:35.850 [2024-07-15 09:49:03.940246] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:35.850 [2024-07-15 09:49:03.940252] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:36.109 09:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:36.109 [2024-07-15 09:49:04.129312] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:36.109 BaseBdev1 00:22:36.109 09:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:22:36.109 09:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:22:36.109 09:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:36.109 09:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:36.109 09:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:36.109 09:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:36.109 09:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_wait_for_examine 00:22:36.368 09:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:36.627 [ 00:22:36.627 { 00:22:36.627 "name": "BaseBdev1", 00:22:36.627 "aliases": [ 00:22:36.627 "788177f0-428f-11ef-a0af-c98d8ee52a94" 00:22:36.627 ], 00:22:36.627 "product_name": "Malloc disk", 00:22:36.627 "block_size": 512, 00:22:36.627 "num_blocks": 65536, 00:22:36.627 "uuid": "788177f0-428f-11ef-a0af-c98d8ee52a94", 00:22:36.627 "assigned_rate_limits": { 00:22:36.627 "rw_ios_per_sec": 0, 00:22:36.627 "rw_mbytes_per_sec": 0, 00:22:36.627 "r_mbytes_per_sec": 0, 00:22:36.627 "w_mbytes_per_sec": 0 00:22:36.627 }, 00:22:36.627 "claimed": true, 00:22:36.627 "claim_type": "exclusive_write", 00:22:36.627 "zoned": false, 00:22:36.627 "supported_io_types": { 00:22:36.627 "read": true, 00:22:36.627 "write": true, 00:22:36.627 "unmap": true, 00:22:36.627 "flush": true, 00:22:36.627 "reset": true, 00:22:36.627 "nvme_admin": false, 00:22:36.627 "nvme_io": false, 00:22:36.627 "nvme_io_md": false, 00:22:36.627 "write_zeroes": true, 00:22:36.627 "zcopy": true, 00:22:36.627 "get_zone_info": false, 00:22:36.627 "zone_management": false, 00:22:36.627 "zone_append": false, 00:22:36.627 "compare": false, 00:22:36.627 "compare_and_write": false, 00:22:36.627 "abort": true, 00:22:36.627 "seek_hole": false, 00:22:36.627 "seek_data": false, 00:22:36.627 "copy": true, 00:22:36.627 "nvme_iov_md": false 00:22:36.627 }, 00:22:36.627 "memory_domains": [ 00:22:36.627 { 00:22:36.627 "dma_device_id": "system", 00:22:36.627 "dma_device_type": 1 00:22:36.627 }, 00:22:36.627 { 00:22:36.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:36.627 "dma_device_type": 2 00:22:36.627 } 00:22:36.627 ], 00:22:36.627 "driver_specific": {} 00:22:36.627 } 00:22:36.627 ] 00:22:36.627 09:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:36.627 09:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:36.627 09:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:36.627 09:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:36.627 09:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:36.627 09:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:36.627 09:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:36.627 09:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:36.627 09:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:36.627 09:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:36.627 09:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:36.627 09:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:36.627 09:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:36.886 09:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:22:36.886 "name": "Existed_Raid", 00:22:36.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:36.886 "strip_size_kb": 0, 00:22:36.886 "state": "configuring", 00:22:36.886 "raid_level": "raid1", 00:22:36.886 "superblock": false, 00:22:36.886 "num_base_bdevs": 3, 00:22:36.886 "num_base_bdevs_discovered": 1, 00:22:36.886 "num_base_bdevs_operational": 3, 00:22:36.886 "base_bdevs_list": [ 00:22:36.886 { 00:22:36.886 "name": "BaseBdev1", 00:22:36.886 "uuid": "788177f0-428f-11ef-a0af-c98d8ee52a94", 00:22:36.886 "is_configured": true, 00:22:36.886 "data_offset": 0, 00:22:36.886 "data_size": 65536 00:22:36.886 }, 00:22:36.886 { 00:22:36.886 "name": "BaseBdev2", 00:22:36.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:36.886 "is_configured": false, 00:22:36.886 "data_offset": 0, 00:22:36.886 "data_size": 0 00:22:36.886 }, 00:22:36.886 { 00:22:36.886 "name": "BaseBdev3", 00:22:36.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:36.886 "is_configured": false, 00:22:36.886 "data_offset": 0, 00:22:36.886 "data_size": 0 00:22:36.886 } 00:22:36.886 ] 00:22:36.886 }' 00:22:36.886 09:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:36.886 09:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.146 09:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:37.405 [2024-07-15 09:49:05.324258] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:37.405 [2024-07-15 09:49:05.324287] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2da0fa434500 name Existed_Raid, state configuring 00:22:37.405 09:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:37.665 [2024-07-15 09:49:05.540277] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:37.665 [2024-07-15 09:49:05.541166] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:37.665 [2024-07-15 09:49:05.541213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:37.665 [2024-07-15 09:49:05.541218] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:37.665 [2024-07-15 09:49:05.541225] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:37.665 09:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:22:37.665 09:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:37.665 09:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:37.665 09:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:37.665 09:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:37.665 09:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:37.665 09:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:37.665 09:49:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:37.665 09:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:37.665 09:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:37.665 09:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:37.665 09:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:37.665 09:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:37.665 09:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:37.924 09:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:37.924 "name": "Existed_Raid", 00:22:37.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:37.924 "strip_size_kb": 0, 00:22:37.924 "state": "configuring", 00:22:37.924 "raid_level": "raid1", 00:22:37.924 "superblock": false, 00:22:37.924 "num_base_bdevs": 3, 00:22:37.924 "num_base_bdevs_discovered": 1, 00:22:37.924 "num_base_bdevs_operational": 3, 00:22:37.924 "base_bdevs_list": [ 00:22:37.924 { 00:22:37.924 "name": "BaseBdev1", 00:22:37.924 "uuid": "788177f0-428f-11ef-a0af-c98d8ee52a94", 00:22:37.924 "is_configured": true, 00:22:37.924 "data_offset": 0, 00:22:37.924 "data_size": 65536 00:22:37.924 }, 00:22:37.924 { 00:22:37.924 "name": "BaseBdev2", 00:22:37.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:37.924 "is_configured": false, 00:22:37.924 "data_offset": 0, 00:22:37.924 "data_size": 0 00:22:37.924 }, 00:22:37.924 { 00:22:37.924 "name": "BaseBdev3", 00:22:37.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:37.924 "is_configured": false, 00:22:37.924 "data_offset": 0, 00:22:37.924 "data_size": 0 00:22:37.924 } 00:22:37.924 ] 00:22:37.924 }' 00:22:37.924 09:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:37.924 09:49:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.183 09:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:38.183 [2024-07-15 09:49:06.244455] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:38.183 BaseBdev2 00:22:38.183 09:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:22:38.183 09:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:22:38.183 09:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:38.183 09:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:38.183 09:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:38.183 09:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:38.183 09:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:38.441 09:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:38.700 [ 00:22:38.700 { 00:22:38.700 "name": "BaseBdev2", 00:22:38.700 "aliases": [ 00:22:38.700 "79c45cdb-428f-11ef-a0af-c98d8ee52a94" 00:22:38.700 ], 00:22:38.700 "product_name": "Malloc disk", 00:22:38.700 "block_size": 512, 00:22:38.700 "num_blocks": 65536, 00:22:38.700 "uuid": "79c45cdb-428f-11ef-a0af-c98d8ee52a94", 00:22:38.700 "assigned_rate_limits": { 00:22:38.700 "rw_ios_per_sec": 0, 00:22:38.700 "rw_mbytes_per_sec": 0, 00:22:38.700 "r_mbytes_per_sec": 0, 00:22:38.700 "w_mbytes_per_sec": 0 00:22:38.700 }, 00:22:38.700 "claimed": true, 00:22:38.700 "claim_type": "exclusive_write", 00:22:38.700 "zoned": false, 00:22:38.700 "supported_io_types": { 00:22:38.700 "read": true, 00:22:38.700 "write": true, 00:22:38.700 "unmap": true, 00:22:38.700 "flush": true, 00:22:38.700 "reset": true, 00:22:38.700 "nvme_admin": false, 00:22:38.700 "nvme_io": false, 00:22:38.700 "nvme_io_md": false, 00:22:38.700 "write_zeroes": true, 00:22:38.700 "zcopy": true, 00:22:38.700 "get_zone_info": false, 00:22:38.700 "zone_management": false, 00:22:38.700 "zone_append": false, 00:22:38.700 "compare": false, 00:22:38.700 "compare_and_write": false, 00:22:38.700 "abort": true, 00:22:38.700 "seek_hole": false, 00:22:38.700 "seek_data": false, 00:22:38.700 "copy": true, 00:22:38.700 "nvme_iov_md": false 00:22:38.700 }, 00:22:38.700 "memory_domains": [ 00:22:38.700 { 00:22:38.700 "dma_device_id": "system", 00:22:38.700 "dma_device_type": 1 00:22:38.700 }, 00:22:38.700 { 00:22:38.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:38.700 "dma_device_type": 2 00:22:38.700 } 00:22:38.700 ], 00:22:38.700 "driver_specific": {} 00:22:38.700 } 00:22:38.700 ] 00:22:38.700 09:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:38.700 09:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:38.700 09:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:38.700 09:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:38.700 09:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:38.700 09:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:38.700 09:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:38.700 09:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:38.700 09:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:38.700 09:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:38.700 09:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:38.700 09:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:38.700 09:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:38.700 09:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:38.700 09:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:38.962 09:49:06 
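Worth noting in the BaseBdev2 descriptor dumped above: once the raid claims a base bdev, the Malloc disk reports "claimed": true with "claim_type": "exclusive_write", which is what waitforbdev (common/autotest_common.sh@897-@905) confirms through bdev_get_bdevs with the 2000 ms timeout. A sketch of the same claim check, under the same path and socket assumptions as the sketch earlier:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_wait_for_examine
  # A member owned by the raid shows claimed=true / exclusive_write (cf. the dump above)
  $rpc bdev_get_bdevs -b BaseBdev2 -t 2000 |
      jq -e '.[0].claimed and .[0].claim_type == "exclusive_write"'

In the segment that follows, the readback shows num_base_bdevs_discovered at 2, and then creation of the third member lets bdev_raid.c:1694-1726 log the assembly itself: the io device is registered (blockcnt 65536, blocklen 512) and Existed_Raid goes online.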
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:38.962 "name": "Existed_Raid", 00:22:38.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:38.962 "strip_size_kb": 0, 00:22:38.962 "state": "configuring", 00:22:38.962 "raid_level": "raid1", 00:22:38.962 "superblock": false, 00:22:38.962 "num_base_bdevs": 3, 00:22:38.962 "num_base_bdevs_discovered": 2, 00:22:38.962 "num_base_bdevs_operational": 3, 00:22:38.962 "base_bdevs_list": [ 00:22:38.962 { 00:22:38.962 "name": "BaseBdev1", 00:22:38.962 "uuid": "788177f0-428f-11ef-a0af-c98d8ee52a94", 00:22:38.962 "is_configured": true, 00:22:38.962 "data_offset": 0, 00:22:38.962 "data_size": 65536 00:22:38.962 }, 00:22:38.962 { 00:22:38.962 "name": "BaseBdev2", 00:22:38.962 "uuid": "79c45cdb-428f-11ef-a0af-c98d8ee52a94", 00:22:38.962 "is_configured": true, 00:22:38.962 "data_offset": 0, 00:22:38.962 "data_size": 65536 00:22:38.962 }, 00:22:38.962 { 00:22:38.962 "name": "BaseBdev3", 00:22:38.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:38.962 "is_configured": false, 00:22:38.962 "data_offset": 0, 00:22:38.962 "data_size": 0 00:22:38.962 } 00:22:38.962 ] 00:22:38.962 }' 00:22:38.962 09:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:38.962 09:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.280 09:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:39.280 [2024-07-15 09:49:07.320490] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:39.280 [2024-07-15 09:49:07.320515] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2da0fa434a00 00:22:39.281 [2024-07-15 09:49:07.320519] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:39.281 [2024-07-15 09:49:07.320538] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2da0fa497e20 00:22:39.281 [2024-07-15 09:49:07.320639] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2da0fa434a00 00:22:39.281 [2024-07-15 09:49:07.320642] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2da0fa434a00 00:22:39.281 [2024-07-15 09:49:07.320669] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:39.281 BaseBdev3 00:22:39.281 09:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:22:39.281 09:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:22:39.281 09:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:39.281 09:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:39.281 09:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:39.281 09:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:39.281 09:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:39.540 09:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:22:39.797 [ 00:22:39.797 { 00:22:39.797 "name": "BaseBdev3", 00:22:39.797 "aliases": [ 00:22:39.797 "7a688e52-428f-11ef-a0af-c98d8ee52a94" 00:22:39.797 ], 00:22:39.797 "product_name": "Malloc disk", 00:22:39.797 "block_size": 512, 00:22:39.797 "num_blocks": 65536, 00:22:39.797 "uuid": "7a688e52-428f-11ef-a0af-c98d8ee52a94", 00:22:39.797 "assigned_rate_limits": { 00:22:39.797 "rw_ios_per_sec": 0, 00:22:39.797 "rw_mbytes_per_sec": 0, 00:22:39.797 "r_mbytes_per_sec": 0, 00:22:39.797 "w_mbytes_per_sec": 0 00:22:39.798 }, 00:22:39.798 "claimed": true, 00:22:39.798 "claim_type": "exclusive_write", 00:22:39.798 "zoned": false, 00:22:39.798 "supported_io_types": { 00:22:39.798 "read": true, 00:22:39.798 "write": true, 00:22:39.798 "unmap": true, 00:22:39.798 "flush": true, 00:22:39.798 "reset": true, 00:22:39.798 "nvme_admin": false, 00:22:39.798 "nvme_io": false, 00:22:39.798 "nvme_io_md": false, 00:22:39.798 "write_zeroes": true, 00:22:39.798 "zcopy": true, 00:22:39.798 "get_zone_info": false, 00:22:39.798 "zone_management": false, 00:22:39.798 "zone_append": false, 00:22:39.798 "compare": false, 00:22:39.798 "compare_and_write": false, 00:22:39.798 "abort": true, 00:22:39.798 "seek_hole": false, 00:22:39.798 "seek_data": false, 00:22:39.798 "copy": true, 00:22:39.798 "nvme_iov_md": false 00:22:39.798 }, 00:22:39.798 "memory_domains": [ 00:22:39.798 { 00:22:39.798 "dma_device_id": "system", 00:22:39.798 "dma_device_type": 1 00:22:39.798 }, 00:22:39.798 { 00:22:39.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:39.798 "dma_device_type": 2 00:22:39.798 } 00:22:39.798 ], 00:22:39.798 "driver_specific": {} 00:22:39.798 } 00:22:39.798 ] 00:22:39.798 09:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:39.798 09:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:39.798 09:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:39.798 09:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:22:39.798 09:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:39.798 09:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:39.798 09:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:39.798 09:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:39.798 09:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:39.798 09:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:39.798 09:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:39.798 09:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:39.798 09:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:39.798 09:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.798 09:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:40.057 09:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:22:40.057 "name": "Existed_Raid", 00:22:40.057 "uuid": "7a6893f4-428f-11ef-a0af-c98d8ee52a94", 00:22:40.057 "strip_size_kb": 0, 00:22:40.057 "state": "online", 00:22:40.057 "raid_level": "raid1", 00:22:40.057 "superblock": false, 00:22:40.057 "num_base_bdevs": 3, 00:22:40.057 "num_base_bdevs_discovered": 3, 00:22:40.057 "num_base_bdevs_operational": 3, 00:22:40.057 "base_bdevs_list": [ 00:22:40.057 { 00:22:40.057 "name": "BaseBdev1", 00:22:40.057 "uuid": "788177f0-428f-11ef-a0af-c98d8ee52a94", 00:22:40.057 "is_configured": true, 00:22:40.057 "data_offset": 0, 00:22:40.057 "data_size": 65536 00:22:40.057 }, 00:22:40.057 { 00:22:40.057 "name": "BaseBdev2", 00:22:40.057 "uuid": "79c45cdb-428f-11ef-a0af-c98d8ee52a94", 00:22:40.057 "is_configured": true, 00:22:40.057 "data_offset": 0, 00:22:40.057 "data_size": 65536 00:22:40.057 }, 00:22:40.057 { 00:22:40.057 "name": "BaseBdev3", 00:22:40.057 "uuid": "7a688e52-428f-11ef-a0af-c98d8ee52a94", 00:22:40.057 "is_configured": true, 00:22:40.057 "data_offset": 0, 00:22:40.057 "data_size": 65536 00:22:40.057 } 00:22:40.057 ] 00:22:40.057 }' 00:22:40.057 09:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:40.057 09:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.316 09:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:22:40.316 09:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:40.316 09:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:40.316 09:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:40.316 09:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:40.316 09:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:40.316 09:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:40.316 09:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:40.316 [2024-07-15 09:49:08.404439] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:40.574 09:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:40.574 "name": "Existed_Raid", 00:22:40.574 "aliases": [ 00:22:40.574 "7a6893f4-428f-11ef-a0af-c98d8ee52a94" 00:22:40.574 ], 00:22:40.574 "product_name": "Raid Volume", 00:22:40.574 "block_size": 512, 00:22:40.574 "num_blocks": 65536, 00:22:40.574 "uuid": "7a6893f4-428f-11ef-a0af-c98d8ee52a94", 00:22:40.574 "assigned_rate_limits": { 00:22:40.574 "rw_ios_per_sec": 0, 00:22:40.574 "rw_mbytes_per_sec": 0, 00:22:40.574 "r_mbytes_per_sec": 0, 00:22:40.574 "w_mbytes_per_sec": 0 00:22:40.574 }, 00:22:40.574 "claimed": false, 00:22:40.574 "zoned": false, 00:22:40.574 "supported_io_types": { 00:22:40.574 "read": true, 00:22:40.574 "write": true, 00:22:40.574 "unmap": false, 00:22:40.574 "flush": false, 00:22:40.574 "reset": true, 00:22:40.574 "nvme_admin": false, 00:22:40.574 "nvme_io": false, 00:22:40.574 "nvme_io_md": false, 00:22:40.574 "write_zeroes": true, 00:22:40.574 "zcopy": false, 00:22:40.574 "get_zone_info": false, 00:22:40.574 "zone_management": false, 00:22:40.574 "zone_append": false, 00:22:40.574 "compare": false, 
00:22:40.574 "compare_and_write": false, 00:22:40.574 "abort": false, 00:22:40.574 "seek_hole": false, 00:22:40.574 "seek_data": false, 00:22:40.574 "copy": false, 00:22:40.574 "nvme_iov_md": false 00:22:40.574 }, 00:22:40.574 "memory_domains": [ 00:22:40.574 { 00:22:40.574 "dma_device_id": "system", 00:22:40.574 "dma_device_type": 1 00:22:40.574 }, 00:22:40.574 { 00:22:40.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:40.574 "dma_device_type": 2 00:22:40.574 }, 00:22:40.574 { 00:22:40.574 "dma_device_id": "system", 00:22:40.574 "dma_device_type": 1 00:22:40.574 }, 00:22:40.574 { 00:22:40.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:40.574 "dma_device_type": 2 00:22:40.574 }, 00:22:40.574 { 00:22:40.574 "dma_device_id": "system", 00:22:40.574 "dma_device_type": 1 00:22:40.574 }, 00:22:40.574 { 00:22:40.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:40.574 "dma_device_type": 2 00:22:40.574 } 00:22:40.574 ], 00:22:40.574 "driver_specific": { 00:22:40.574 "raid": { 00:22:40.574 "uuid": "7a6893f4-428f-11ef-a0af-c98d8ee52a94", 00:22:40.574 "strip_size_kb": 0, 00:22:40.574 "state": "online", 00:22:40.574 "raid_level": "raid1", 00:22:40.574 "superblock": false, 00:22:40.574 "num_base_bdevs": 3, 00:22:40.574 "num_base_bdevs_discovered": 3, 00:22:40.574 "num_base_bdevs_operational": 3, 00:22:40.574 "base_bdevs_list": [ 00:22:40.574 { 00:22:40.574 "name": "BaseBdev1", 00:22:40.574 "uuid": "788177f0-428f-11ef-a0af-c98d8ee52a94", 00:22:40.574 "is_configured": true, 00:22:40.574 "data_offset": 0, 00:22:40.574 "data_size": 65536 00:22:40.574 }, 00:22:40.574 { 00:22:40.574 "name": "BaseBdev2", 00:22:40.574 "uuid": "79c45cdb-428f-11ef-a0af-c98d8ee52a94", 00:22:40.574 "is_configured": true, 00:22:40.574 "data_offset": 0, 00:22:40.574 "data_size": 65536 00:22:40.574 }, 00:22:40.574 { 00:22:40.574 "name": "BaseBdev3", 00:22:40.574 "uuid": "7a688e52-428f-11ef-a0af-c98d8ee52a94", 00:22:40.574 "is_configured": true, 00:22:40.574 "data_offset": 0, 00:22:40.574 "data_size": 65536 00:22:40.574 } 00:22:40.574 ] 00:22:40.574 } 00:22:40.574 } 00:22:40.574 }' 00:22:40.574 09:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:40.574 09:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:22:40.574 BaseBdev2 00:22:40.574 BaseBdev3' 00:22:40.574 09:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:40.574 09:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:22:40.574 09:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:40.833 09:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:40.833 "name": "BaseBdev1", 00:22:40.833 "aliases": [ 00:22:40.833 "788177f0-428f-11ef-a0af-c98d8ee52a94" 00:22:40.833 ], 00:22:40.833 "product_name": "Malloc disk", 00:22:40.833 "block_size": 512, 00:22:40.833 "num_blocks": 65536, 00:22:40.833 "uuid": "788177f0-428f-11ef-a0af-c98d8ee52a94", 00:22:40.833 "assigned_rate_limits": { 00:22:40.833 "rw_ios_per_sec": 0, 00:22:40.833 "rw_mbytes_per_sec": 0, 00:22:40.833 "r_mbytes_per_sec": 0, 00:22:40.833 "w_mbytes_per_sec": 0 00:22:40.833 }, 00:22:40.833 "claimed": true, 00:22:40.833 "claim_type": "exclusive_write", 00:22:40.833 "zoned": false, 00:22:40.833 
"supported_io_types": { 00:22:40.833 "read": true, 00:22:40.833 "write": true, 00:22:40.833 "unmap": true, 00:22:40.833 "flush": true, 00:22:40.833 "reset": true, 00:22:40.833 "nvme_admin": false, 00:22:40.833 "nvme_io": false, 00:22:40.833 "nvme_io_md": false, 00:22:40.833 "write_zeroes": true, 00:22:40.833 "zcopy": true, 00:22:40.833 "get_zone_info": false, 00:22:40.833 "zone_management": false, 00:22:40.833 "zone_append": false, 00:22:40.833 "compare": false, 00:22:40.833 "compare_and_write": false, 00:22:40.833 "abort": true, 00:22:40.833 "seek_hole": false, 00:22:40.833 "seek_data": false, 00:22:40.833 "copy": true, 00:22:40.833 "nvme_iov_md": false 00:22:40.833 }, 00:22:40.833 "memory_domains": [ 00:22:40.833 { 00:22:40.833 "dma_device_id": "system", 00:22:40.833 "dma_device_type": 1 00:22:40.833 }, 00:22:40.833 { 00:22:40.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:40.833 "dma_device_type": 2 00:22:40.833 } 00:22:40.833 ], 00:22:40.833 "driver_specific": {} 00:22:40.833 }' 00:22:40.833 09:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:40.833 09:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:40.833 09:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:40.833 09:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:40.833 09:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:40.833 09:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:40.833 09:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:40.833 09:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:40.833 09:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:40.833 09:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:40.833 09:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:40.833 09:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:40.833 09:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:40.833 09:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:40.833 09:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:41.091 09:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:41.091 "name": "BaseBdev2", 00:22:41.091 "aliases": [ 00:22:41.091 "79c45cdb-428f-11ef-a0af-c98d8ee52a94" 00:22:41.091 ], 00:22:41.091 "product_name": "Malloc disk", 00:22:41.091 "block_size": 512, 00:22:41.091 "num_blocks": 65536, 00:22:41.091 "uuid": "79c45cdb-428f-11ef-a0af-c98d8ee52a94", 00:22:41.091 "assigned_rate_limits": { 00:22:41.091 "rw_ios_per_sec": 0, 00:22:41.091 "rw_mbytes_per_sec": 0, 00:22:41.091 "r_mbytes_per_sec": 0, 00:22:41.091 "w_mbytes_per_sec": 0 00:22:41.091 }, 00:22:41.091 "claimed": true, 00:22:41.091 "claim_type": "exclusive_write", 00:22:41.091 "zoned": false, 00:22:41.091 "supported_io_types": { 00:22:41.091 "read": true, 00:22:41.091 "write": true, 00:22:41.091 "unmap": true, 00:22:41.091 "flush": true, 00:22:41.091 "reset": true, 00:22:41.091 "nvme_admin": false, 
00:22:41.091 "nvme_io": false, 00:22:41.091 "nvme_io_md": false, 00:22:41.091 "write_zeroes": true, 00:22:41.091 "zcopy": true, 00:22:41.091 "get_zone_info": false, 00:22:41.091 "zone_management": false, 00:22:41.091 "zone_append": false, 00:22:41.091 "compare": false, 00:22:41.091 "compare_and_write": false, 00:22:41.091 "abort": true, 00:22:41.091 "seek_hole": false, 00:22:41.091 "seek_data": false, 00:22:41.091 "copy": true, 00:22:41.091 "nvme_iov_md": false 00:22:41.091 }, 00:22:41.091 "memory_domains": [ 00:22:41.091 { 00:22:41.091 "dma_device_id": "system", 00:22:41.091 "dma_device_type": 1 00:22:41.091 }, 00:22:41.091 { 00:22:41.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:41.091 "dma_device_type": 2 00:22:41.091 } 00:22:41.091 ], 00:22:41.091 "driver_specific": {} 00:22:41.091 }' 00:22:41.091 09:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:41.091 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:41.091 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:41.091 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:41.091 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:41.091 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:41.091 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:41.091 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:41.091 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:41.092 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:41.092 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:41.092 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:41.092 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:41.092 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:41.092 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:41.350 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:41.350 "name": "BaseBdev3", 00:22:41.350 "aliases": [ 00:22:41.350 "7a688e52-428f-11ef-a0af-c98d8ee52a94" 00:22:41.350 ], 00:22:41.350 "product_name": "Malloc disk", 00:22:41.350 "block_size": 512, 00:22:41.350 "num_blocks": 65536, 00:22:41.350 "uuid": "7a688e52-428f-11ef-a0af-c98d8ee52a94", 00:22:41.350 "assigned_rate_limits": { 00:22:41.350 "rw_ios_per_sec": 0, 00:22:41.350 "rw_mbytes_per_sec": 0, 00:22:41.350 "r_mbytes_per_sec": 0, 00:22:41.350 "w_mbytes_per_sec": 0 00:22:41.350 }, 00:22:41.350 "claimed": true, 00:22:41.350 "claim_type": "exclusive_write", 00:22:41.350 "zoned": false, 00:22:41.350 "supported_io_types": { 00:22:41.350 "read": true, 00:22:41.350 "write": true, 00:22:41.350 "unmap": true, 00:22:41.350 "flush": true, 00:22:41.350 "reset": true, 00:22:41.350 "nvme_admin": false, 00:22:41.350 "nvme_io": false, 00:22:41.350 "nvme_io_md": false, 00:22:41.350 "write_zeroes": true, 00:22:41.350 "zcopy": true, 00:22:41.350 "get_zone_info": false, 00:22:41.350 "zone_management": 
false, 00:22:41.350 "zone_append": false, 00:22:41.350 "compare": false, 00:22:41.350 "compare_and_write": false, 00:22:41.350 "abort": true, 00:22:41.350 "seek_hole": false, 00:22:41.350 "seek_data": false, 00:22:41.350 "copy": true, 00:22:41.350 "nvme_iov_md": false 00:22:41.350 }, 00:22:41.350 "memory_domains": [ 00:22:41.350 { 00:22:41.350 "dma_device_id": "system", 00:22:41.350 "dma_device_type": 1 00:22:41.350 }, 00:22:41.350 { 00:22:41.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:41.350 "dma_device_type": 2 00:22:41.350 } 00:22:41.350 ], 00:22:41.350 "driver_specific": {} 00:22:41.350 }' 00:22:41.350 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:41.350 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:41.350 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:41.350 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:41.350 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:41.350 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:41.350 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:41.350 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:41.350 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:41.350 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:41.350 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:41.350 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:41.350 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:41.607 [2024-07-15 09:49:09.576500] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:41.607 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:22:41.607 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:22:41.607 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:41.607 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:22:41.607 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:22:41.607 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:22:41.607 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:41.607 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:41.607 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:41.607 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:41.607 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:41.607 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:41.607 09:49:09 bdev_raid.raid_state_function_test -- 
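Here the test starts removing members from the online array: bdev_malloc_delete BaseBdev1 (bdev_raid.sh@274) kicks out a base bdev, and because raid1 has redundancy (the has_redundancy case at @213-@214 above), the expected state stays "online" with only 2 of 3 members operational; the readback that follows shows the vacated slot as "name": null with an all-zero uuid. The same experiment as a sketch, again assuming the trace's rpc.py path and socket:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_malloc_delete BaseBdev1
  info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
  # raid1 survives a single member loss: still online, operational count drops to 2
  [[ $(jq -r .state <<< "$info") == online ]]
  [[ $(jq -r .num_base_bdevs_operational <<< "$info") -eq 2 ]]

Deleting the remaining members later in the trace drives the opposite transition, with raid_bdev_deconfigure logging "state changing from online to offline" before the raid is cleaned up.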
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:41.607 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:41.607 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:41.607 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.607 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:41.864 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:41.864 "name": "Existed_Raid", 00:22:41.864 "uuid": "7a6893f4-428f-11ef-a0af-c98d8ee52a94", 00:22:41.864 "strip_size_kb": 0, 00:22:41.864 "state": "online", 00:22:41.864 "raid_level": "raid1", 00:22:41.864 "superblock": false, 00:22:41.864 "num_base_bdevs": 3, 00:22:41.864 "num_base_bdevs_discovered": 2, 00:22:41.864 "num_base_bdevs_operational": 2, 00:22:41.864 "base_bdevs_list": [ 00:22:41.864 { 00:22:41.864 "name": null, 00:22:41.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:41.864 "is_configured": false, 00:22:41.864 "data_offset": 0, 00:22:41.864 "data_size": 65536 00:22:41.864 }, 00:22:41.864 { 00:22:41.864 "name": "BaseBdev2", 00:22:41.864 "uuid": "79c45cdb-428f-11ef-a0af-c98d8ee52a94", 00:22:41.864 "is_configured": true, 00:22:41.864 "data_offset": 0, 00:22:41.864 "data_size": 65536 00:22:41.864 }, 00:22:41.864 { 00:22:41.864 "name": "BaseBdev3", 00:22:41.864 "uuid": "7a688e52-428f-11ef-a0af-c98d8ee52a94", 00:22:41.864 "is_configured": true, 00:22:41.864 "data_offset": 0, 00:22:41.864 "data_size": 65536 00:22:41.864 } 00:22:41.864 ] 00:22:41.864 }' 00:22:41.864 09:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:41.864 09:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.121 09:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:22:42.121 09:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:42.121 09:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.121 09:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:42.381 09:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:42.381 09:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:42.381 09:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:42.639 [2024-07-15 09:49:10.496762] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:42.639 09:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:42.639 09:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:42.639 09:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.639 09:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:42.639 09:49:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:42.639 09:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:42.639 09:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:42.898 [2024-07-15 09:49:10.917291] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:42.898 [2024-07-15 09:49:10.917325] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:42.898 [2024-07-15 09:49:10.925941] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:42.898 [2024-07-15 09:49:10.925958] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:42.898 [2024-07-15 09:49:10.925962] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2da0fa434a00 name Existed_Raid, state offline 00:22:42.898 09:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:42.898 09:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:42.898 09:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.898 09:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:22:43.184 09:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:22:43.184 09:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:22:43.184 09:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:22:43.184 09:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:22:43.184 09:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:43.184 09:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:43.443 BaseBdev2 00:22:43.443 09:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:22:43.443 09:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:22:43.443 09:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:43.443 09:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:43.443 09:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:43.443 09:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:43.443 09:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:43.702 09:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:43.702 [ 00:22:43.702 { 00:22:43.702 "name": "BaseBdev2", 00:22:43.702 "aliases": [ 00:22:43.702 "7cccf9ac-428f-11ef-a0af-c98d8ee52a94" 00:22:43.702 ], 00:22:43.702 
"product_name": "Malloc disk", 00:22:43.702 "block_size": 512, 00:22:43.702 "num_blocks": 65536, 00:22:43.702 "uuid": "7cccf9ac-428f-11ef-a0af-c98d8ee52a94", 00:22:43.702 "assigned_rate_limits": { 00:22:43.702 "rw_ios_per_sec": 0, 00:22:43.702 "rw_mbytes_per_sec": 0, 00:22:43.702 "r_mbytes_per_sec": 0, 00:22:43.702 "w_mbytes_per_sec": 0 00:22:43.702 }, 00:22:43.702 "claimed": false, 00:22:43.702 "zoned": false, 00:22:43.702 "supported_io_types": { 00:22:43.702 "read": true, 00:22:43.702 "write": true, 00:22:43.702 "unmap": true, 00:22:43.702 "flush": true, 00:22:43.702 "reset": true, 00:22:43.702 "nvme_admin": false, 00:22:43.702 "nvme_io": false, 00:22:43.702 "nvme_io_md": false, 00:22:43.702 "write_zeroes": true, 00:22:43.702 "zcopy": true, 00:22:43.702 "get_zone_info": false, 00:22:43.702 "zone_management": false, 00:22:43.702 "zone_append": false, 00:22:43.702 "compare": false, 00:22:43.702 "compare_and_write": false, 00:22:43.702 "abort": true, 00:22:43.702 "seek_hole": false, 00:22:43.702 "seek_data": false, 00:22:43.702 "copy": true, 00:22:43.702 "nvme_iov_md": false 00:22:43.702 }, 00:22:43.702 "memory_domains": [ 00:22:43.702 { 00:22:43.702 "dma_device_id": "system", 00:22:43.702 "dma_device_type": 1 00:22:43.702 }, 00:22:43.702 { 00:22:43.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:43.702 "dma_device_type": 2 00:22:43.702 } 00:22:43.702 ], 00:22:43.702 "driver_specific": {} 00:22:43.702 } 00:22:43.702 ] 00:22:43.702 09:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:43.702 09:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:43.702 09:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:43.702 09:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:43.962 BaseBdev3 00:22:43.962 09:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:22:43.962 09:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:22:43.962 09:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:43.962 09:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:43.962 09:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:43.962 09:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:43.962 09:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:44.221 09:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:44.480 [ 00:22:44.480 { 00:22:44.480 "name": "BaseBdev3", 00:22:44.480 "aliases": [ 00:22:44.480 "7d2887f4-428f-11ef-a0af-c98d8ee52a94" 00:22:44.480 ], 00:22:44.480 "product_name": "Malloc disk", 00:22:44.480 "block_size": 512, 00:22:44.480 "num_blocks": 65536, 00:22:44.480 "uuid": "7d2887f4-428f-11ef-a0af-c98d8ee52a94", 00:22:44.480 "assigned_rate_limits": { 00:22:44.480 "rw_ios_per_sec": 0, 00:22:44.480 "rw_mbytes_per_sec": 0, 00:22:44.480 "r_mbytes_per_sec": 0, 00:22:44.480 "w_mbytes_per_sec": 0 
00:22:44.480 }, 00:22:44.480 "claimed": false, 00:22:44.480 "zoned": false, 00:22:44.480 "supported_io_types": { 00:22:44.480 "read": true, 00:22:44.480 "write": true, 00:22:44.480 "unmap": true, 00:22:44.480 "flush": true, 00:22:44.480 "reset": true, 00:22:44.480 "nvme_admin": false, 00:22:44.480 "nvme_io": false, 00:22:44.480 "nvme_io_md": false, 00:22:44.480 "write_zeroes": true, 00:22:44.480 "zcopy": true, 00:22:44.480 "get_zone_info": false, 00:22:44.480 "zone_management": false, 00:22:44.480 "zone_append": false, 00:22:44.480 "compare": false, 00:22:44.480 "compare_and_write": false, 00:22:44.480 "abort": true, 00:22:44.480 "seek_hole": false, 00:22:44.481 "seek_data": false, 00:22:44.481 "copy": true, 00:22:44.481 "nvme_iov_md": false 00:22:44.481 }, 00:22:44.481 "memory_domains": [ 00:22:44.481 { 00:22:44.481 "dma_device_id": "system", 00:22:44.481 "dma_device_type": 1 00:22:44.481 }, 00:22:44.481 { 00:22:44.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:44.481 "dma_device_type": 2 00:22:44.481 } 00:22:44.481 ], 00:22:44.481 "driver_specific": {} 00:22:44.481 } 00:22:44.481 ] 00:22:44.481 09:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:44.481 09:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:44.481 09:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:44.481 09:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:44.481 [2024-07-15 09:49:12.578012] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:44.481 [2024-07-15 09:49:12.578072] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:44.481 [2024-07-15 09:49:12.578080] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:44.481 [2024-07-15 09:49:12.578671] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:44.740 09:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:44.740 09:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:44.740 09:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:44.740 09:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:44.740 09:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:44.740 09:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:44.740 09:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:44.740 09:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:44.740 09:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:44.740 09:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:44.740 09:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.740 09:49:12 
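This rebuilt scenario is the mirror image of the first: BaseBdev2 and BaseBdev3 now exist while BaseBdev1 does not, yet bdev_raid_create (bdev_raid.sh@305) still succeeds, claiming the two present members and leaving the array "configuring" with num_base_bdevs_discovered=2. Further down, the test also edits the membership of the configuring array in place via bdev_raid_remove_base_bdev (@308, @317) and bdev_raid_add_base_bdev (@321, @329). A condensed sketch of that membership round-trip, with the same bdev names the trace uses:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_raid_remove_base_bdev BaseBdev2            # slot remains, is_configured flips to false
  $rpc bdev_raid_add_base_bdev Existed_Raid BaseBdev2  # re-attach the same bdev to the slot
  $rpc bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[] | {name, is_configured}'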
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:44.740 09:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:44.740 "name": "Existed_Raid", 00:22:44.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.740 "strip_size_kb": 0, 00:22:44.740 "state": "configuring", 00:22:44.740 "raid_level": "raid1", 00:22:44.740 "superblock": false, 00:22:44.740 "num_base_bdevs": 3, 00:22:44.740 "num_base_bdevs_discovered": 2, 00:22:44.740 "num_base_bdevs_operational": 3, 00:22:44.740 "base_bdevs_list": [ 00:22:44.740 { 00:22:44.740 "name": "BaseBdev1", 00:22:44.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.740 "is_configured": false, 00:22:44.740 "data_offset": 0, 00:22:44.740 "data_size": 0 00:22:44.740 }, 00:22:44.740 { 00:22:44.740 "name": "BaseBdev2", 00:22:44.740 "uuid": "7cccf9ac-428f-11ef-a0af-c98d8ee52a94", 00:22:44.740 "is_configured": true, 00:22:44.740 "data_offset": 0, 00:22:44.740 "data_size": 65536 00:22:44.740 }, 00:22:44.740 { 00:22:44.740 "name": "BaseBdev3", 00:22:44.740 "uuid": "7d2887f4-428f-11ef-a0af-c98d8ee52a94", 00:22:44.740 "is_configured": true, 00:22:44.740 "data_offset": 0, 00:22:44.740 "data_size": 65536 00:22:44.740 } 00:22:44.740 ] 00:22:44.740 }' 00:22:44.740 09:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:44.740 09:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.998 09:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:45.258 [2024-07-15 09:49:13.342051] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:45.258 09:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:45.258 09:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:45.258 09:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:45.258 09:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:45.258 09:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:45.258 09:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:45.258 09:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:45.258 09:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:45.258 09:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:45.258 09:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:45.258 09:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.258 09:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:45.517 09:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:45.517 "name": "Existed_Raid", 00:22:45.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.517 "strip_size_kb": 0, 00:22:45.517 "state": 
"configuring", 00:22:45.517 "raid_level": "raid1", 00:22:45.517 "superblock": false, 00:22:45.517 "num_base_bdevs": 3, 00:22:45.517 "num_base_bdevs_discovered": 1, 00:22:45.517 "num_base_bdevs_operational": 3, 00:22:45.517 "base_bdevs_list": [ 00:22:45.517 { 00:22:45.517 "name": "BaseBdev1", 00:22:45.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.517 "is_configured": false, 00:22:45.517 "data_offset": 0, 00:22:45.517 "data_size": 0 00:22:45.517 }, 00:22:45.517 { 00:22:45.517 "name": null, 00:22:45.517 "uuid": "7cccf9ac-428f-11ef-a0af-c98d8ee52a94", 00:22:45.517 "is_configured": false, 00:22:45.517 "data_offset": 0, 00:22:45.517 "data_size": 65536 00:22:45.517 }, 00:22:45.517 { 00:22:45.517 "name": "BaseBdev3", 00:22:45.517 "uuid": "7d2887f4-428f-11ef-a0af-c98d8ee52a94", 00:22:45.517 "is_configured": true, 00:22:45.517 "data_offset": 0, 00:22:45.517 "data_size": 65536 00:22:45.517 } 00:22:45.517 ] 00:22:45.517 }' 00:22:45.517 09:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:45.517 09:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:45.776 09:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.776 09:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:46.035 09:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:22:46.035 09:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:46.294 [2024-07-15 09:49:14.238225] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:46.294 BaseBdev1 00:22:46.294 09:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:22:46.294 09:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:22:46.294 09:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:46.294 09:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:46.294 09:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:46.294 09:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:46.294 09:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:46.554 09:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:46.813 [ 00:22:46.813 { 00:22:46.813 "name": "BaseBdev1", 00:22:46.813 "aliases": [ 00:22:46.813 "7e881e3d-428f-11ef-a0af-c98d8ee52a94" 00:22:46.813 ], 00:22:46.813 "product_name": "Malloc disk", 00:22:46.813 "block_size": 512, 00:22:46.813 "num_blocks": 65536, 00:22:46.813 "uuid": "7e881e3d-428f-11ef-a0af-c98d8ee52a94", 00:22:46.813 "assigned_rate_limits": { 00:22:46.813 "rw_ios_per_sec": 0, 00:22:46.813 "rw_mbytes_per_sec": 0, 00:22:46.813 "r_mbytes_per_sec": 0, 00:22:46.813 "w_mbytes_per_sec": 0 00:22:46.813 }, 00:22:46.813 "claimed": true, 00:22:46.813 "claim_type": 
"exclusive_write", 00:22:46.813 "zoned": false, 00:22:46.813 "supported_io_types": { 00:22:46.813 "read": true, 00:22:46.813 "write": true, 00:22:46.813 "unmap": true, 00:22:46.813 "flush": true, 00:22:46.813 "reset": true, 00:22:46.813 "nvme_admin": false, 00:22:46.813 "nvme_io": false, 00:22:46.813 "nvme_io_md": false, 00:22:46.813 "write_zeroes": true, 00:22:46.813 "zcopy": true, 00:22:46.813 "get_zone_info": false, 00:22:46.813 "zone_management": false, 00:22:46.813 "zone_append": false, 00:22:46.813 "compare": false, 00:22:46.813 "compare_and_write": false, 00:22:46.813 "abort": true, 00:22:46.813 "seek_hole": false, 00:22:46.813 "seek_data": false, 00:22:46.813 "copy": true, 00:22:46.813 "nvme_iov_md": false 00:22:46.813 }, 00:22:46.813 "memory_domains": [ 00:22:46.813 { 00:22:46.813 "dma_device_id": "system", 00:22:46.813 "dma_device_type": 1 00:22:46.813 }, 00:22:46.813 { 00:22:46.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:46.813 "dma_device_type": 2 00:22:46.813 } 00:22:46.814 ], 00:22:46.814 "driver_specific": {} 00:22:46.814 } 00:22:46.814 ] 00:22:46.814 09:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:46.814 09:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:46.814 09:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:46.814 09:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:46.814 09:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:46.814 09:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:46.814 09:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:46.814 09:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:46.814 09:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:46.814 09:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:46.814 09:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:46.814 09:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.814 09:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:47.074 09:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:47.074 "name": "Existed_Raid", 00:22:47.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:47.074 "strip_size_kb": 0, 00:22:47.074 "state": "configuring", 00:22:47.074 "raid_level": "raid1", 00:22:47.074 "superblock": false, 00:22:47.074 "num_base_bdevs": 3, 00:22:47.074 "num_base_bdevs_discovered": 2, 00:22:47.074 "num_base_bdevs_operational": 3, 00:22:47.074 "base_bdevs_list": [ 00:22:47.074 { 00:22:47.074 "name": "BaseBdev1", 00:22:47.074 "uuid": "7e881e3d-428f-11ef-a0af-c98d8ee52a94", 00:22:47.074 "is_configured": true, 00:22:47.074 "data_offset": 0, 00:22:47.074 "data_size": 65536 00:22:47.074 }, 00:22:47.074 { 00:22:47.074 "name": null, 00:22:47.074 "uuid": "7cccf9ac-428f-11ef-a0af-c98d8ee52a94", 00:22:47.074 "is_configured": false, 00:22:47.074 "data_offset": 0, 
00:22:47.074 "data_size": 65536 00:22:47.074 }, 00:22:47.074 { 00:22:47.074 "name": "BaseBdev3", 00:22:47.074 "uuid": "7d2887f4-428f-11ef-a0af-c98d8ee52a94", 00:22:47.074 "is_configured": true, 00:22:47.074 "data_offset": 0, 00:22:47.074 "data_size": 65536 00:22:47.074 } 00:22:47.074 ] 00:22:47.074 }' 00:22:47.074 09:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:47.074 09:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.332 09:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:47.332 09:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:47.590 09:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:22:47.590 09:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:22:47.849 [2024-07-15 09:49:15.834202] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:47.849 09:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:47.849 09:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:47.849 09:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:47.849 09:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:47.849 09:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:47.849 09:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:47.849 09:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:47.849 09:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:47.849 09:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:47.849 09:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:47.849 09:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:47.849 09:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.110 09:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:48.110 "name": "Existed_Raid", 00:22:48.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.110 "strip_size_kb": 0, 00:22:48.110 "state": "configuring", 00:22:48.110 "raid_level": "raid1", 00:22:48.110 "superblock": false, 00:22:48.110 "num_base_bdevs": 3, 00:22:48.110 "num_base_bdevs_discovered": 1, 00:22:48.110 "num_base_bdevs_operational": 3, 00:22:48.110 "base_bdevs_list": [ 00:22:48.110 { 00:22:48.110 "name": "BaseBdev1", 00:22:48.110 "uuid": "7e881e3d-428f-11ef-a0af-c98d8ee52a94", 00:22:48.110 "is_configured": true, 00:22:48.110 "data_offset": 0, 00:22:48.110 "data_size": 65536 00:22:48.110 }, 00:22:48.110 { 00:22:48.110 "name": null, 00:22:48.110 "uuid": "7cccf9ac-428f-11ef-a0af-c98d8ee52a94", 00:22:48.110 
"is_configured": false, 00:22:48.110 "data_offset": 0, 00:22:48.110 "data_size": 65536 00:22:48.110 }, 00:22:48.110 { 00:22:48.110 "name": null, 00:22:48.110 "uuid": "7d2887f4-428f-11ef-a0af-c98d8ee52a94", 00:22:48.110 "is_configured": false, 00:22:48.110 "data_offset": 0, 00:22:48.110 "data_size": 65536 00:22:48.110 } 00:22:48.110 ] 00:22:48.110 }' 00:22:48.110 09:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:48.110 09:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:48.370 09:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.370 09:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:48.629 09:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:22:48.629 09:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:48.889 [2024-07-15 09:49:16.870267] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:48.889 09:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:48.889 09:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:48.889 09:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:48.889 09:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:48.889 09:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:48.889 09:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:48.889 09:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:48.889 09:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:48.889 09:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:48.889 09:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:48.889 09:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.889 09:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:49.149 09:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:49.149 "name": "Existed_Raid", 00:22:49.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:49.149 "strip_size_kb": 0, 00:22:49.149 "state": "configuring", 00:22:49.149 "raid_level": "raid1", 00:22:49.149 "superblock": false, 00:22:49.149 "num_base_bdevs": 3, 00:22:49.149 "num_base_bdevs_discovered": 2, 00:22:49.149 "num_base_bdevs_operational": 3, 00:22:49.149 "base_bdevs_list": [ 00:22:49.149 { 00:22:49.149 "name": "BaseBdev1", 00:22:49.149 "uuid": "7e881e3d-428f-11ef-a0af-c98d8ee52a94", 00:22:49.149 "is_configured": true, 00:22:49.149 "data_offset": 0, 00:22:49.149 "data_size": 65536 00:22:49.149 }, 00:22:49.149 { 00:22:49.149 "name": null, 
00:22:49.149 "uuid": "7cccf9ac-428f-11ef-a0af-c98d8ee52a94", 00:22:49.149 "is_configured": false, 00:22:49.149 "data_offset": 0, 00:22:49.149 "data_size": 65536 00:22:49.149 }, 00:22:49.149 { 00:22:49.149 "name": "BaseBdev3", 00:22:49.149 "uuid": "7d2887f4-428f-11ef-a0af-c98d8ee52a94", 00:22:49.149 "is_configured": true, 00:22:49.149 "data_offset": 0, 00:22:49.149 "data_size": 65536 00:22:49.149 } 00:22:49.149 ] 00:22:49.149 }' 00:22:49.149 09:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:49.149 09:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:49.409 09:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.409 09:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:49.668 09:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:22:49.668 09:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:49.926 [2024-07-15 09:49:17.922323] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:49.926 09:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:49.926 09:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:49.926 09:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:49.926 09:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:49.926 09:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:49.926 09:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:49.926 09:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:49.926 09:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:49.926 09:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:49.926 09:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:49.926 09:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:49.926 09:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:50.185 09:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:50.185 "name": "Existed_Raid", 00:22:50.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:50.185 "strip_size_kb": 0, 00:22:50.185 "state": "configuring", 00:22:50.185 "raid_level": "raid1", 00:22:50.185 "superblock": false, 00:22:50.185 "num_base_bdevs": 3, 00:22:50.185 "num_base_bdevs_discovered": 1, 00:22:50.185 "num_base_bdevs_operational": 3, 00:22:50.185 "base_bdevs_list": [ 00:22:50.185 { 00:22:50.185 "name": null, 00:22:50.185 "uuid": "7e881e3d-428f-11ef-a0af-c98d8ee52a94", 00:22:50.185 "is_configured": false, 00:22:50.185 "data_offset": 0, 00:22:50.185 "data_size": 65536 00:22:50.185 }, 
00:22:50.185 { 00:22:50.185 "name": null, 00:22:50.185 "uuid": "7cccf9ac-428f-11ef-a0af-c98d8ee52a94", 00:22:50.185 "is_configured": false, 00:22:50.185 "data_offset": 0, 00:22:50.185 "data_size": 65536 00:22:50.185 }, 00:22:50.185 { 00:22:50.185 "name": "BaseBdev3", 00:22:50.185 "uuid": "7d2887f4-428f-11ef-a0af-c98d8ee52a94", 00:22:50.185 "is_configured": true, 00:22:50.185 "data_offset": 0, 00:22:50.185 "data_size": 65536 00:22:50.185 } 00:22:50.185 ] 00:22:50.185 }' 00:22:50.185 09:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:50.185 09:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:50.444 09:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:50.444 09:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:50.704 09:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:22:50.704 09:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:50.963 [2024-07-15 09:49:18.919315] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:50.963 09:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:50.963 09:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:50.963 09:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:50.963 09:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:50.963 09:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:50.963 09:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:50.963 09:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:50.963 09:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:50.963 09:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:50.963 09:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:50.963 09:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:50.963 09:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.223 09:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:51.223 "name": "Existed_Raid", 00:22:51.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.223 "strip_size_kb": 0, 00:22:51.223 "state": "configuring", 00:22:51.223 "raid_level": "raid1", 00:22:51.223 "superblock": false, 00:22:51.223 "num_base_bdevs": 3, 00:22:51.223 "num_base_bdevs_discovered": 2, 00:22:51.223 "num_base_bdevs_operational": 3, 00:22:51.223 "base_bdevs_list": [ 00:22:51.223 { 00:22:51.223 "name": null, 00:22:51.223 "uuid": "7e881e3d-428f-11ef-a0af-c98d8ee52a94", 00:22:51.223 "is_configured": false, 
00:22:51.223 "data_offset": 0, 00:22:51.223 "data_size": 65536 00:22:51.223 }, 00:22:51.223 { 00:22:51.223 "name": "BaseBdev2", 00:22:51.223 "uuid": "7cccf9ac-428f-11ef-a0af-c98d8ee52a94", 00:22:51.223 "is_configured": true, 00:22:51.223 "data_offset": 0, 00:22:51.223 "data_size": 65536 00:22:51.223 }, 00:22:51.223 { 00:22:51.223 "name": "BaseBdev3", 00:22:51.223 "uuid": "7d2887f4-428f-11ef-a0af-c98d8ee52a94", 00:22:51.223 "is_configured": true, 00:22:51.223 "data_offset": 0, 00:22:51.223 "data_size": 65536 00:22:51.223 } 00:22:51.223 ] 00:22:51.223 }' 00:22:51.223 09:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:51.223 09:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.482 09:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.482 09:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:51.740 09:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:22:51.740 09:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.740 09:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:51.998 09:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 7e881e3d-428f-11ef-a0af-c98d8ee52a94 00:22:51.998 [2024-07-15 09:49:20.095511] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:51.998 [2024-07-15 09:49:20.095542] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2da0fa434f00 00:22:51.998 [2024-07-15 09:49:20.095546] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:51.998 [2024-07-15 09:49:20.095566] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2da0fa497e20 00:22:51.998 [2024-07-15 09:49:20.095640] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2da0fa434f00 00:22:51.998 [2024-07-15 09:49:20.095643] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2da0fa434f00 00:22:51.998 [2024-07-15 09:49:20.095671] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:51.998 NewBaseBdev 00:22:52.257 09:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:22:52.257 09:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:22:52.257 09:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:52.257 09:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:52.257 09:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:52.257 09:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:52.257 09:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:52.257 09:49:20 
00:22:52.517 09:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:52.517 [ 00:22:52.517 { 00:22:52.517 "name": "NewBaseBdev", 00:22:52.517 "aliases": [ 00:22:52.517 "7e881e3d-428f-11ef-a0af-c98d8ee52a94" 00:22:52.517 ], 00:22:52.517 "product_name": "Malloc disk", 00:22:52.517 "block_size": 512, 00:22:52.517 "num_blocks": 65536, 00:22:52.517 "uuid": "7e881e3d-428f-11ef-a0af-c98d8ee52a94", 00:22:52.517 "assigned_rate_limits": { 00:22:52.517 "rw_ios_per_sec": 0, 00:22:52.517 "rw_mbytes_per_sec": 0, 00:22:52.517 "r_mbytes_per_sec": 0, 00:22:52.517 "w_mbytes_per_sec": 0 00:22:52.517 }, 00:22:52.517 "claimed": true, 00:22:52.517 "claim_type": "exclusive_write", 00:22:52.517 "zoned": false, 00:22:52.517 "supported_io_types": { 00:22:52.517 "read": true, 00:22:52.517 "write": true, 00:22:52.517 "unmap": true, 00:22:52.517 "flush": true, 00:22:52.517 "reset": true, 00:22:52.517 "nvme_admin": false, 00:22:52.517 "nvme_io": false, 00:22:52.517 "nvme_io_md": false, 00:22:52.517 "write_zeroes": true, 00:22:52.517 "zcopy": true, 00:22:52.517 "get_zone_info": false, 00:22:52.517 "zone_management": false, 00:22:52.517 "zone_append": false, 00:22:52.517 "compare": false, 00:22:52.517 "compare_and_write": false, 00:22:52.517 "abort": true, 00:22:52.517 "seek_hole": false, 00:22:52.517 "seek_data": false, 00:22:52.517 "copy": true, 00:22:52.517 "nvme_iov_md": false 00:22:52.517 }, 00:22:52.517 "memory_domains": [ 00:22:52.517 { 00:22:52.517 "dma_device_id": "system", 00:22:52.517 "dma_device_type": 1 00:22:52.517 }, 00:22:52.517 { 00:22:52.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:52.517 "dma_device_type": 2 00:22:52.517 } 00:22:52.517 ], 00:22:52.517 "driver_specific": {} 00:22:52.517 } 00:22:52.517 ] 00:22:52.517 09:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:52.517 09:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:22:52.517 09:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:52.517 09:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:52.517 09:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:52.517 09:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:52.517 09:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:52.517 09:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:52.517 09:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:52.517 09:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:52.517 09:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:52.517 09:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.517 09:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:52.776 09:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:52.776 "name": "Existed_Raid",
00:22:52.776 "uuid": "8205e481-428f-11ef-a0af-c98d8ee52a94", 00:22:52.776 "strip_size_kb": 0, 00:22:52.776 "state": "online", 00:22:52.776 "raid_level": "raid1", 00:22:52.776 "superblock": false, 00:22:52.776 "num_base_bdevs": 3, 00:22:52.776 "num_base_bdevs_discovered": 3, 00:22:52.776 "num_base_bdevs_operational": 3, 00:22:52.776 "base_bdevs_list": [ 00:22:52.776 { 00:22:52.776 "name": "NewBaseBdev", 00:22:52.776 "uuid": "7e881e3d-428f-11ef-a0af-c98d8ee52a94", 00:22:52.776 "is_configured": true, 00:22:52.776 "data_offset": 0, 00:22:52.776 "data_size": 65536 00:22:52.776 }, 00:22:52.776 { 00:22:52.776 "name": "BaseBdev2", 00:22:52.776 "uuid": "7cccf9ac-428f-11ef-a0af-c98d8ee52a94", 00:22:52.776 "is_configured": true, 00:22:52.776 "data_offset": 0, 00:22:52.776 "data_size": 65536 00:22:52.776 }, 00:22:52.776 { 00:22:52.776 "name": "BaseBdev3", 00:22:52.776 "uuid": "7d2887f4-428f-11ef-a0af-c98d8ee52a94", 00:22:52.776 "is_configured": true, 00:22:52.776 "data_offset": 0, 00:22:52.776 "data_size": 65536 00:22:52.776 } 00:22:52.776 ] 00:22:52.776 }' 00:22:52.776 09:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:52.776 09:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:53.035 09:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:22:53.035 09:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:53.035 09:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:53.035 09:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:53.035 09:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:53.035 09:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:53.035 09:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:53.035 09:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:53.294 [2024-07-15 09:49:21.335546] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:53.294 09:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:53.294 "name": "Existed_Raid", 00:22:53.294 "aliases": [ 00:22:53.294 "8205e481-428f-11ef-a0af-c98d8ee52a94" 00:22:53.294 ], 00:22:53.294 "product_name": "Raid Volume", 00:22:53.294 "block_size": 512, 00:22:53.294 "num_blocks": 65536, 00:22:53.294 "uuid": "8205e481-428f-11ef-a0af-c98d8ee52a94", 00:22:53.294 "assigned_rate_limits": { 00:22:53.294 "rw_ios_per_sec": 0, 00:22:53.294 "rw_mbytes_per_sec": 0, 00:22:53.294 "r_mbytes_per_sec": 0, 00:22:53.294 "w_mbytes_per_sec": 0 00:22:53.294 }, 00:22:53.294 "claimed": false, 00:22:53.294 "zoned": false, 00:22:53.294 "supported_io_types": { 00:22:53.294 "read": true, 00:22:53.294 "write": true, 00:22:53.294 "unmap": false, 00:22:53.294 "flush": false, 00:22:53.294 "reset": true, 00:22:53.294 "nvme_admin": false, 00:22:53.294 "nvme_io": false, 00:22:53.294 "nvme_io_md": false, 00:22:53.294 "write_zeroes": true, 00:22:53.294 "zcopy": false, 00:22:53.294 "get_zone_info": false, 00:22:53.294 "zone_management": false, 00:22:53.294 "zone_append": false, 00:22:53.294 "compare": false, 00:22:53.294 "compare_and_write": false, 00:22:53.294 "abort": 
false, 00:22:53.294 "seek_hole": false, 00:22:53.294 "seek_data": false, 00:22:53.294 "copy": false, 00:22:53.294 "nvme_iov_md": false 00:22:53.294 }, 00:22:53.294 "memory_domains": [ 00:22:53.294 { 00:22:53.294 "dma_device_id": "system", 00:22:53.294 "dma_device_type": 1 00:22:53.294 }, 00:22:53.294 { 00:22:53.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:53.294 "dma_device_type": 2 00:22:53.294 }, 00:22:53.294 { 00:22:53.294 "dma_device_id": "system", 00:22:53.294 "dma_device_type": 1 00:22:53.294 }, 00:22:53.294 { 00:22:53.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:53.294 "dma_device_type": 2 00:22:53.294 }, 00:22:53.294 { 00:22:53.294 "dma_device_id": "system", 00:22:53.294 "dma_device_type": 1 00:22:53.294 }, 00:22:53.294 { 00:22:53.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:53.294 "dma_device_type": 2 00:22:53.294 } 00:22:53.294 ], 00:22:53.294 "driver_specific": { 00:22:53.294 "raid": { 00:22:53.294 "uuid": "8205e481-428f-11ef-a0af-c98d8ee52a94", 00:22:53.294 "strip_size_kb": 0, 00:22:53.294 "state": "online", 00:22:53.294 "raid_level": "raid1", 00:22:53.294 "superblock": false, 00:22:53.294 "num_base_bdevs": 3, 00:22:53.294 "num_base_bdevs_discovered": 3, 00:22:53.294 "num_base_bdevs_operational": 3, 00:22:53.294 "base_bdevs_list": [ 00:22:53.294 { 00:22:53.294 "name": "NewBaseBdev", 00:22:53.294 "uuid": "7e881e3d-428f-11ef-a0af-c98d8ee52a94", 00:22:53.294 "is_configured": true, 00:22:53.294 "data_offset": 0, 00:22:53.294 "data_size": 65536 00:22:53.294 }, 00:22:53.294 { 00:22:53.294 "name": "BaseBdev2", 00:22:53.295 "uuid": "7cccf9ac-428f-11ef-a0af-c98d8ee52a94", 00:22:53.295 "is_configured": true, 00:22:53.295 "data_offset": 0, 00:22:53.295 "data_size": 65536 00:22:53.295 }, 00:22:53.295 { 00:22:53.295 "name": "BaseBdev3", 00:22:53.295 "uuid": "7d2887f4-428f-11ef-a0af-c98d8ee52a94", 00:22:53.295 "is_configured": true, 00:22:53.295 "data_offset": 0, 00:22:53.295 "data_size": 65536 00:22:53.295 } 00:22:53.295 ] 00:22:53.295 } 00:22:53.295 } 00:22:53.295 }' 00:22:53.295 09:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:53.295 09:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:22:53.295 BaseBdev2 00:22:53.295 BaseBdev3' 00:22:53.295 09:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:53.295 09:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:22:53.295 09:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:53.554 09:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:53.554 "name": "NewBaseBdev", 00:22:53.554 "aliases": [ 00:22:53.554 "7e881e3d-428f-11ef-a0af-c98d8ee52a94" 00:22:53.554 ], 00:22:53.554 "product_name": "Malloc disk", 00:22:53.554 "block_size": 512, 00:22:53.554 "num_blocks": 65536, 00:22:53.554 "uuid": "7e881e3d-428f-11ef-a0af-c98d8ee52a94", 00:22:53.554 "assigned_rate_limits": { 00:22:53.554 "rw_ios_per_sec": 0, 00:22:53.554 "rw_mbytes_per_sec": 0, 00:22:53.554 "r_mbytes_per_sec": 0, 00:22:53.554 "w_mbytes_per_sec": 0 00:22:53.554 }, 00:22:53.554 "claimed": true, 00:22:53.554 "claim_type": "exclusive_write", 00:22:53.554 "zoned": false, 00:22:53.554 "supported_io_types": { 00:22:53.554 "read": true, 00:22:53.554 "write": 
true, 00:22:53.554 "unmap": true, 00:22:53.554 "flush": true, 00:22:53.554 "reset": true, 00:22:53.554 "nvme_admin": false, 00:22:53.554 "nvme_io": false, 00:22:53.554 "nvme_io_md": false, 00:22:53.554 "write_zeroes": true, 00:22:53.554 "zcopy": true, 00:22:53.554 "get_zone_info": false, 00:22:53.554 "zone_management": false, 00:22:53.554 "zone_append": false, 00:22:53.554 "compare": false, 00:22:53.554 "compare_and_write": false, 00:22:53.554 "abort": true, 00:22:53.554 "seek_hole": false, 00:22:53.554 "seek_data": false, 00:22:53.554 "copy": true, 00:22:53.554 "nvme_iov_md": false 00:22:53.554 }, 00:22:53.554 "memory_domains": [ 00:22:53.554 { 00:22:53.554 "dma_device_id": "system", 00:22:53.554 "dma_device_type": 1 00:22:53.554 }, 00:22:53.554 { 00:22:53.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:53.554 "dma_device_type": 2 00:22:53.554 } 00:22:53.554 ], 00:22:53.554 "driver_specific": {} 00:22:53.554 }' 00:22:53.554 09:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:53.554 09:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:53.813 09:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:53.813 09:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:53.813 09:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:53.813 09:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:53.813 09:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:53.813 09:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:53.813 09:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:53.813 09:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:53.813 09:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:53.813 09:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:53.813 09:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:53.813 09:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:53.813 09:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:54.071 09:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:54.071 "name": "BaseBdev2", 00:22:54.071 "aliases": [ 00:22:54.071 "7cccf9ac-428f-11ef-a0af-c98d8ee52a94" 00:22:54.071 ], 00:22:54.071 "product_name": "Malloc disk", 00:22:54.071 "block_size": 512, 00:22:54.071 "num_blocks": 65536, 00:22:54.071 "uuid": "7cccf9ac-428f-11ef-a0af-c98d8ee52a94", 00:22:54.071 "assigned_rate_limits": { 00:22:54.071 "rw_ios_per_sec": 0, 00:22:54.071 "rw_mbytes_per_sec": 0, 00:22:54.071 "r_mbytes_per_sec": 0, 00:22:54.071 "w_mbytes_per_sec": 0 00:22:54.071 }, 00:22:54.071 "claimed": true, 00:22:54.071 "claim_type": "exclusive_write", 00:22:54.071 "zoned": false, 00:22:54.071 "supported_io_types": { 00:22:54.071 "read": true, 00:22:54.071 "write": true, 00:22:54.071 "unmap": true, 00:22:54.071 "flush": true, 00:22:54.071 "reset": true, 00:22:54.071 "nvme_admin": false, 00:22:54.071 "nvme_io": false, 00:22:54.071 "nvme_io_md": false, 
00:22:54.071 "write_zeroes": true, 00:22:54.071 "zcopy": true, 00:22:54.071 "get_zone_info": false, 00:22:54.071 "zone_management": false, 00:22:54.071 "zone_append": false, 00:22:54.071 "compare": false, 00:22:54.071 "compare_and_write": false, 00:22:54.071 "abort": true, 00:22:54.071 "seek_hole": false, 00:22:54.071 "seek_data": false, 00:22:54.071 "copy": true, 00:22:54.071 "nvme_iov_md": false 00:22:54.071 }, 00:22:54.071 "memory_domains": [ 00:22:54.071 { 00:22:54.071 "dma_device_id": "system", 00:22:54.071 "dma_device_type": 1 00:22:54.071 }, 00:22:54.071 { 00:22:54.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.071 "dma_device_type": 2 00:22:54.071 } 00:22:54.071 ], 00:22:54.071 "driver_specific": {} 00:22:54.071 }' 00:22:54.071 09:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:54.071 09:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:54.071 09:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:54.071 09:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:54.071 09:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:54.071 09:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:54.071 09:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:54.071 09:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:54.071 09:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:54.071 09:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:54.071 09:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:54.071 09:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:54.071 09:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:54.071 09:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:54.071 09:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:54.329 09:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:54.329 "name": "BaseBdev3", 00:22:54.329 "aliases": [ 00:22:54.329 "7d2887f4-428f-11ef-a0af-c98d8ee52a94" 00:22:54.329 ], 00:22:54.329 "product_name": "Malloc disk", 00:22:54.329 "block_size": 512, 00:22:54.329 "num_blocks": 65536, 00:22:54.329 "uuid": "7d2887f4-428f-11ef-a0af-c98d8ee52a94", 00:22:54.329 "assigned_rate_limits": { 00:22:54.329 "rw_ios_per_sec": 0, 00:22:54.329 "rw_mbytes_per_sec": 0, 00:22:54.329 "r_mbytes_per_sec": 0, 00:22:54.329 "w_mbytes_per_sec": 0 00:22:54.329 }, 00:22:54.329 "claimed": true, 00:22:54.329 "claim_type": "exclusive_write", 00:22:54.329 "zoned": false, 00:22:54.329 "supported_io_types": { 00:22:54.329 "read": true, 00:22:54.329 "write": true, 00:22:54.329 "unmap": true, 00:22:54.329 "flush": true, 00:22:54.329 "reset": true, 00:22:54.329 "nvme_admin": false, 00:22:54.329 "nvme_io": false, 00:22:54.329 "nvme_io_md": false, 00:22:54.329 "write_zeroes": true, 00:22:54.329 "zcopy": true, 00:22:54.329 "get_zone_info": false, 00:22:54.329 "zone_management": false, 00:22:54.329 "zone_append": false, 00:22:54.329 "compare": 
false, 00:22:54.329 "compare_and_write": false, 00:22:54.329 "abort": true, 00:22:54.329 "seek_hole": false, 00:22:54.329 "seek_data": false, 00:22:54.329 "copy": true, 00:22:54.329 "nvme_iov_md": false 00:22:54.329 }, 00:22:54.329 "memory_domains": [ 00:22:54.329 { 00:22:54.329 "dma_device_id": "system", 00:22:54.329 "dma_device_type": 1 00:22:54.329 }, 00:22:54.329 { 00:22:54.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.329 "dma_device_type": 2 00:22:54.329 } 00:22:54.329 ], 00:22:54.329 "driver_specific": {} 00:22:54.329 }' 00:22:54.329 09:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:54.329 09:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:54.329 09:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:54.329 09:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:54.329 09:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:54.329 09:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:54.329 09:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:54.329 09:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:54.329 09:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:54.329 09:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:54.329 09:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:54.329 09:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:54.329 09:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:54.586 [2024-07-15 09:49:22.675563] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:54.586 [2024-07-15 09:49:22.675599] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:54.586 [2024-07-15 09:49:22.675627] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:54.586 [2024-07-15 09:49:22.675738] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:54.586 [2024-07-15 09:49:22.675743] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2da0fa434f00 name Existed_Raid, state offline 00:22:54.843 09:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 55992 00:22:54.843 09:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 55992 ']' 00:22:54.843 09:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 55992 00:22:54.843 09:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:22:54.843 09:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:22:54.843 09:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 55992 00:22:54.843 09:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:22:54.843 09:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:22:54.843 09:49:22 
00:22:54.843 09:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:22:54.843 09:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 55992' killing process with pid 55992 00:22:54.843 09:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 55992 00:22:54.843 [2024-07-15 09:49:22.710549] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:54.843 09:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 55992 00:22:54.844 [2024-07-15 09:49:22.738810] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:55.103 09:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:22:55.103 00:22:55.103 real 0m21.181s 00:22:55.103 user 0m37.785s 00:22:55.103 sys 0m3.808s 00:22:55.103 ************************************ 00:22:55.103 END TEST raid_state_function_test 00:22:55.103 ************************************ 00:22:55.103 09:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:55.103 09:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.103 09:49:23 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:55.103 09:49:23 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:22:55.103 09:49:23 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:22:55.103 09:49:23 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:55.103 09:49:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:55.103 ************************************ 00:22:55.103 START TEST raid_state_function_test_sb 00:22:55.103 ************************************ 00:22:55.103 09:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 3 true 00:22:55.103 09:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:22:55.103 09:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:22:55.103 09:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:22:55.103 09:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:22:55.103 09:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:22:55.103 09:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:55.103 09:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:22:55.103 09:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:55.103 09:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:55.103 09:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:22:55.103 09:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:55.103 09:49:23
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:55.103 09:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:55.103 09:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:22:55.103 09:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:22:55.103 09:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:22:55.103 09:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:22:55.103 09:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:22:55.103 09:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:22:55.103 09:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:22:55.103 09:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:22:55.104 09:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:22:55.104 09:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=56713 00:22:55.104 Process raid pid: 56713 00:22:55.104 09:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 56713' 00:22:55.104 09:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:55.104 09:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 56713 /var/tmp/spdk-raid.sock 00:22:55.104 09:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 56713 ']' 00:22:55.104 09:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:55.104 09:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:55.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:55.104 09:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:55.104 09:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:55.104 09:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:55.104 [2024-07-15 09:49:23.083620] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:22:55.104 [2024-07-15 09:49:23.083947] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:22:56.037 EAL: TSC is not safe to use in SMP mode 00:22:56.037 EAL: TSC is not invariant 00:22:56.037 [2024-07-15 09:49:23.800393] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.037 [2024-07-15 09:49:23.908693] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
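At this point waitforlisten has disabled xtrace and is polling: with rpc_addr=/var/tmp/spdk-raid.sock and max_retries=100 as traced above, it repeatedly checks that the freshly started bdev_svc process (pid 56713) is still alive and that its RPC socket answers before the test is allowed to proceed. A rough sketch of that loop (illustrative only; rpc_get_methods is the standard SPDK liveness RPC, but the retry cadence shown is an assumption):

    # Hedged sketch of a waitforlisten-style loop for this run's socket.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk-raid.sock} i
        for ((i = 100; i > 0; i--)); do                # max_retries=100, per the trace
            kill -0 "$pid" 2>/dev/null || return 1     # target died before listening
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                rpc_get_methods &>/dev/null; then
                return 0                               # socket is up and answering RPCs
            fi
            sleep 0.5                                  # assumed back-off between retries
        done
        return 1                                       # exhausted retries
    }
    # waitforlisten_sketch 56713 /var/tmp/spdk-raid.sock

The EAL notices just above (single core, non-invariant TSC) are emitted by the target during this window; the reactor start that follows marks the point where the socket comes up.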
00:22:56.037 [2024-07-15 09:49:23.911218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.037 [2024-07-15 09:49:23.911948] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:56.037 [2024-07-15 09:49:23.911961] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:56.297 09:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:56.297 09:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:22:56.297 09:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:56.297 [2024-07-15 09:49:24.383026] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:56.297 [2024-07-15 09:49:24.383091] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:56.297 [2024-07-15 09:49:24.383096] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:56.297 [2024-07-15 09:49:24.383103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:56.297 [2024-07-15 09:49:24.383106] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:56.297 [2024-07-15 09:49:24.383113] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:56.556 09:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:56.556 09:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:56.556 09:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:56.556 09:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:56.556 09:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:56.556 09:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:56.556 09:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:56.556 09:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:56.556 09:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:56.556 09:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:56.556 09:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:56.556 09:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:56.557 09:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:56.557 "name": "Existed_Raid", 00:22:56.557 "uuid": "84941ba9-428f-11ef-a0af-c98d8ee52a94", 00:22:56.557 "strip_size_kb": 0, 00:22:56.557 "state": "configuring", 00:22:56.557 "raid_level": "raid1", 00:22:56.557 "superblock": true, 00:22:56.557 "num_base_bdevs": 3, 00:22:56.557 "num_base_bdevs_discovered": 0, 00:22:56.557 "num_base_bdevs_operational": 
3, 00:22:56.557 "base_bdevs_list": [ 00:22:56.557 { 00:22:56.557 "name": "BaseBdev1", 00:22:56.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:56.557 "is_configured": false, 00:22:56.557 "data_offset": 0, 00:22:56.557 "data_size": 0 00:22:56.557 }, 00:22:56.557 { 00:22:56.557 "name": "BaseBdev2", 00:22:56.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:56.557 "is_configured": false, 00:22:56.557 "data_offset": 0, 00:22:56.557 "data_size": 0 00:22:56.557 }, 00:22:56.557 { 00:22:56.557 "name": "BaseBdev3", 00:22:56.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:56.557 "is_configured": false, 00:22:56.557 "data_offset": 0, 00:22:56.557 "data_size": 0 00:22:56.557 } 00:22:56.557 ] 00:22:56.557 }' 00:22:56.557 09:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:56.557 09:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:56.814 09:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:57.072 [2024-07-15 09:49:25.139040] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:57.072 [2024-07-15 09:49:25.139071] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12de80c34500 name Existed_Raid, state configuring 00:22:57.072 09:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:57.329 [2024-07-15 09:49:25.343065] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:57.329 [2024-07-15 09:49:25.343117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:57.329 [2024-07-15 09:49:25.343121] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:57.329 [2024-07-15 09:49:25.343128] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:57.329 [2024-07-15 09:49:25.343132] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:57.329 [2024-07-15 09:49:25.343138] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:57.329 09:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:57.586 [2024-07-15 09:49:25.588292] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:57.586 BaseBdev1 00:22:57.586 09:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:22:57.586 09:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:22:57.586 09:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:57.586 09:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:57.586 09:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:57.586 09:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:57.586 09:49:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:58.152 09:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:58.152 [ 00:22:58.152 { 00:22:58.152 "name": "BaseBdev1", 00:22:58.152 "aliases": [ 00:22:58.152 "854bd619-428f-11ef-a0af-c98d8ee52a94" 00:22:58.152 ], 00:22:58.152 "product_name": "Malloc disk", 00:22:58.152 "block_size": 512, 00:22:58.152 "num_blocks": 65536, 00:22:58.152 "uuid": "854bd619-428f-11ef-a0af-c98d8ee52a94", 00:22:58.152 "assigned_rate_limits": { 00:22:58.152 "rw_ios_per_sec": 0, 00:22:58.152 "rw_mbytes_per_sec": 0, 00:22:58.152 "r_mbytes_per_sec": 0, 00:22:58.152 "w_mbytes_per_sec": 0 00:22:58.152 }, 00:22:58.152 "claimed": true, 00:22:58.152 "claim_type": "exclusive_write", 00:22:58.152 "zoned": false, 00:22:58.152 "supported_io_types": { 00:22:58.152 "read": true, 00:22:58.152 "write": true, 00:22:58.152 "unmap": true, 00:22:58.152 "flush": true, 00:22:58.152 "reset": true, 00:22:58.152 "nvme_admin": false, 00:22:58.152 "nvme_io": false, 00:22:58.152 "nvme_io_md": false, 00:22:58.152 "write_zeroes": true, 00:22:58.152 "zcopy": true, 00:22:58.152 "get_zone_info": false, 00:22:58.152 "zone_management": false, 00:22:58.152 "zone_append": false, 00:22:58.152 "compare": false, 00:22:58.152 "compare_and_write": false, 00:22:58.152 "abort": true, 00:22:58.152 "seek_hole": false, 00:22:58.152 "seek_data": false, 00:22:58.152 "copy": true, 00:22:58.152 "nvme_iov_md": false 00:22:58.152 }, 00:22:58.152 "memory_domains": [ 00:22:58.152 { 00:22:58.152 "dma_device_id": "system", 00:22:58.152 "dma_device_type": 1 00:22:58.152 }, 00:22:58.152 { 00:22:58.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:58.152 "dma_device_type": 2 00:22:58.152 } 00:22:58.152 ], 00:22:58.152 "driver_specific": {} 00:22:58.152 } 00:22:58.152 ] 00:22:58.152 09:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:58.152 09:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:58.152 09:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:58.152 09:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:58.152 09:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:58.152 09:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:58.152 09:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:58.152 09:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:58.152 09:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:58.152 09:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:58.152 09:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:58.152 09:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:58.152 09:49:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:58.410 09:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:58.410 "name": "Existed_Raid", 00:22:58.410 "uuid": "85269940-428f-11ef-a0af-c98d8ee52a94", 00:22:58.410 "strip_size_kb": 0, 00:22:58.410 "state": "configuring", 00:22:58.410 "raid_level": "raid1", 00:22:58.410 "superblock": true, 00:22:58.410 "num_base_bdevs": 3, 00:22:58.410 "num_base_bdevs_discovered": 1, 00:22:58.410 "num_base_bdevs_operational": 3, 00:22:58.410 "base_bdevs_list": [ 00:22:58.410 { 00:22:58.410 "name": "BaseBdev1", 00:22:58.410 "uuid": "854bd619-428f-11ef-a0af-c98d8ee52a94", 00:22:58.410 "is_configured": true, 00:22:58.410 "data_offset": 2048, 00:22:58.410 "data_size": 63488 00:22:58.410 }, 00:22:58.410 { 00:22:58.410 "name": "BaseBdev2", 00:22:58.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:58.410 "is_configured": false, 00:22:58.410 "data_offset": 0, 00:22:58.410 "data_size": 0 00:22:58.410 }, 00:22:58.410 { 00:22:58.410 "name": "BaseBdev3", 00:22:58.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:58.410 "is_configured": false, 00:22:58.410 "data_offset": 0, 00:22:58.410 "data_size": 0 00:22:58.410 } 00:22:58.410 ] 00:22:58.410 }' 00:22:58.410 09:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:58.410 09:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:58.668 09:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:58.926 [2024-07-15 09:49:26.967234] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:58.926 [2024-07-15 09:49:26.967279] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12de80c34500 name Existed_Raid, state configuring 00:22:58.926 09:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:59.184 [2024-07-15 09:49:27.183263] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:59.184 [2024-07-15 09:49:27.184195] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:59.184 [2024-07-15 09:49:27.184246] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:59.184 [2024-07-15 09:49:27.184264] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:59.184 [2024-07-15 09:49:27.184273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:59.184 09:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:22:59.184 09:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:59.184 09:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:59.184 09:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:59.184 09:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:59.184 09:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:22:59.184 09:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:59.184 09:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:59.184 09:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:59.184 09:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:59.184 09:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:59.184 09:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:59.184 09:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:59.184 09:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:59.442 09:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:59.442 "name": "Existed_Raid", 00:22:59.442 "uuid": "863f639b-428f-11ef-a0af-c98d8ee52a94", 00:22:59.442 "strip_size_kb": 0, 00:22:59.442 "state": "configuring", 00:22:59.442 "raid_level": "raid1", 00:22:59.442 "superblock": true, 00:22:59.442 "num_base_bdevs": 3, 00:22:59.442 "num_base_bdevs_discovered": 1, 00:22:59.442 "num_base_bdevs_operational": 3, 00:22:59.442 "base_bdevs_list": [ 00:22:59.442 { 00:22:59.442 "name": "BaseBdev1", 00:22:59.442 "uuid": "854bd619-428f-11ef-a0af-c98d8ee52a94", 00:22:59.442 "is_configured": true, 00:22:59.442 "data_offset": 2048, 00:22:59.442 "data_size": 63488 00:22:59.442 }, 00:22:59.442 { 00:22:59.442 "name": "BaseBdev2", 00:22:59.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:59.442 "is_configured": false, 00:22:59.442 "data_offset": 0, 00:22:59.442 "data_size": 0 00:22:59.442 }, 00:22:59.442 { 00:22:59.442 "name": "BaseBdev3", 00:22:59.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:59.442 "is_configured": false, 00:22:59.442 "data_offset": 0, 00:22:59.442 "data_size": 0 00:22:59.442 } 00:22:59.442 ] 00:22:59.442 }' 00:22:59.442 09:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:59.442 09:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:00.032 09:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:00.032 [2024-07-15 09:49:28.071497] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:00.032 BaseBdev2 00:23:00.032 09:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:23:00.032 09:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:23:00.032 09:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:00.032 09:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:23:00.032 09:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:00.032 09:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:00.032 09:49:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:00.289 09:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:00.547 [ 00:23:00.547 { 00:23:00.547 "name": "BaseBdev2", 00:23:00.547 "aliases": [ 00:23:00.547 "86c6e6c7-428f-11ef-a0af-c98d8ee52a94" 00:23:00.547 ], 00:23:00.547 "product_name": "Malloc disk", 00:23:00.547 "block_size": 512, 00:23:00.547 "num_blocks": 65536, 00:23:00.547 "uuid": "86c6e6c7-428f-11ef-a0af-c98d8ee52a94", 00:23:00.547 "assigned_rate_limits": { 00:23:00.547 "rw_ios_per_sec": 0, 00:23:00.547 "rw_mbytes_per_sec": 0, 00:23:00.547 "r_mbytes_per_sec": 0, 00:23:00.547 "w_mbytes_per_sec": 0 00:23:00.547 }, 00:23:00.547 "claimed": true, 00:23:00.547 "claim_type": "exclusive_write", 00:23:00.547 "zoned": false, 00:23:00.547 "supported_io_types": { 00:23:00.547 "read": true, 00:23:00.547 "write": true, 00:23:00.547 "unmap": true, 00:23:00.547 "flush": true, 00:23:00.547 "reset": true, 00:23:00.547 "nvme_admin": false, 00:23:00.547 "nvme_io": false, 00:23:00.547 "nvme_io_md": false, 00:23:00.547 "write_zeroes": true, 00:23:00.547 "zcopy": true, 00:23:00.547 "get_zone_info": false, 00:23:00.547 "zone_management": false, 00:23:00.547 "zone_append": false, 00:23:00.547 "compare": false, 00:23:00.547 "compare_and_write": false, 00:23:00.547 "abort": true, 00:23:00.547 "seek_hole": false, 00:23:00.547 "seek_data": false, 00:23:00.547 "copy": true, 00:23:00.547 "nvme_iov_md": false 00:23:00.547 }, 00:23:00.547 "memory_domains": [ 00:23:00.547 { 00:23:00.547 "dma_device_id": "system", 00:23:00.547 "dma_device_type": 1 00:23:00.547 }, 00:23:00.547 { 00:23:00.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:00.547 "dma_device_type": 2 00:23:00.547 } 00:23:00.547 ], 00:23:00.547 "driver_specific": {} 00:23:00.547 } 00:23:00.547 ] 00:23:00.547 09:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:23:00.547 09:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:00.547 09:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:00.547 09:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:00.547 09:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:00.547 09:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:00.547 09:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:00.547 09:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:00.547 09:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:00.547 09:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:00.547 09:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:00.547 09:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:00.547 09:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:00.547 09:49:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:00.547 09:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.806 09:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:00.806 "name": "Existed_Raid", 00:23:00.806 "uuid": "863f639b-428f-11ef-a0af-c98d8ee52a94", 00:23:00.806 "strip_size_kb": 0, 00:23:00.806 "state": "configuring", 00:23:00.806 "raid_level": "raid1", 00:23:00.806 "superblock": true, 00:23:00.806 "num_base_bdevs": 3, 00:23:00.806 "num_base_bdevs_discovered": 2, 00:23:00.806 "num_base_bdevs_operational": 3, 00:23:00.806 "base_bdevs_list": [ 00:23:00.806 { 00:23:00.806 "name": "BaseBdev1", 00:23:00.806 "uuid": "854bd619-428f-11ef-a0af-c98d8ee52a94", 00:23:00.806 "is_configured": true, 00:23:00.806 "data_offset": 2048, 00:23:00.806 "data_size": 63488 00:23:00.806 }, 00:23:00.806 { 00:23:00.806 "name": "BaseBdev2", 00:23:00.806 "uuid": "86c6e6c7-428f-11ef-a0af-c98d8ee52a94", 00:23:00.806 "is_configured": true, 00:23:00.806 "data_offset": 2048, 00:23:00.806 "data_size": 63488 00:23:00.806 }, 00:23:00.806 { 00:23:00.806 "name": "BaseBdev3", 00:23:00.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.806 "is_configured": false, 00:23:00.806 "data_offset": 0, 00:23:00.806 "data_size": 0 00:23:00.806 } 00:23:00.806 ] 00:23:00.806 }' 00:23:00.806 09:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:00.806 09:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:01.064 09:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:01.323 [2024-07-15 09:49:29.315612] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:01.323 [2024-07-15 09:49:29.315687] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x12de80c34a00 00:23:01.323 [2024-07-15 09:49:29.315710] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:01.323 [2024-07-15 09:49:29.315729] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12de80c97e20 00:23:01.323 [2024-07-15 09:49:29.315776] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x12de80c34a00 00:23:01.323 [2024-07-15 09:49:29.315780] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x12de80c34a00 00:23:01.323 [2024-07-15 09:49:29.315797] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:01.323 BaseBdev3 00:23:01.323 09:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:23:01.323 09:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:23:01.323 09:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:01.323 09:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:23:01.323 09:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:01.323 09:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:01.323 09:49:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:01.583 09:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:01.842 [ 00:23:01.842 { 00:23:01.842 "name": "BaseBdev3", 00:23:01.842 "aliases": [ 00:23:01.842 "8784bd89-428f-11ef-a0af-c98d8ee52a94" 00:23:01.842 ], 00:23:01.842 "product_name": "Malloc disk", 00:23:01.842 "block_size": 512, 00:23:01.842 "num_blocks": 65536, 00:23:01.842 "uuid": "8784bd89-428f-11ef-a0af-c98d8ee52a94", 00:23:01.842 "assigned_rate_limits": { 00:23:01.842 "rw_ios_per_sec": 0, 00:23:01.842 "rw_mbytes_per_sec": 0, 00:23:01.842 "r_mbytes_per_sec": 0, 00:23:01.842 "w_mbytes_per_sec": 0 00:23:01.842 }, 00:23:01.842 "claimed": true, 00:23:01.842 "claim_type": "exclusive_write", 00:23:01.842 "zoned": false, 00:23:01.842 "supported_io_types": { 00:23:01.842 "read": true, 00:23:01.842 "write": true, 00:23:01.842 "unmap": true, 00:23:01.842 "flush": true, 00:23:01.842 "reset": true, 00:23:01.842 "nvme_admin": false, 00:23:01.842 "nvme_io": false, 00:23:01.842 "nvme_io_md": false, 00:23:01.842 "write_zeroes": true, 00:23:01.842 "zcopy": true, 00:23:01.842 "get_zone_info": false, 00:23:01.842 "zone_management": false, 00:23:01.842 "zone_append": false, 00:23:01.842 "compare": false, 00:23:01.842 "compare_and_write": false, 00:23:01.842 "abort": true, 00:23:01.842 "seek_hole": false, 00:23:01.842 "seek_data": false, 00:23:01.842 "copy": true, 00:23:01.842 "nvme_iov_md": false 00:23:01.842 }, 00:23:01.842 "memory_domains": [ 00:23:01.842 { 00:23:01.842 "dma_device_id": "system", 00:23:01.842 "dma_device_type": 1 00:23:01.842 }, 00:23:01.842 { 00:23:01.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:01.842 "dma_device_type": 2 00:23:01.842 } 00:23:01.842 ], 00:23:01.842 "driver_specific": {} 00:23:01.842 } 00:23:01.842 ] 00:23:01.842 09:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:23:01.842 09:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:01.842 09:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:01.842 09:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:23:01.842 09:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:01.842 09:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:01.842 09:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:01.842 09:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:01.842 09:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:01.842 09:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:01.842 09:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:01.842 09:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:01.842 09:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:01.842 09:49:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.842 09:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:02.102 09:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:02.102 "name": "Existed_Raid", 00:23:02.102 "uuid": "863f639b-428f-11ef-a0af-c98d8ee52a94", 00:23:02.102 "strip_size_kb": 0, 00:23:02.102 "state": "online", 00:23:02.102 "raid_level": "raid1", 00:23:02.102 "superblock": true, 00:23:02.102 "num_base_bdevs": 3, 00:23:02.102 "num_base_bdevs_discovered": 3, 00:23:02.102 "num_base_bdevs_operational": 3, 00:23:02.102 "base_bdevs_list": [ 00:23:02.102 { 00:23:02.102 "name": "BaseBdev1", 00:23:02.102 "uuid": "854bd619-428f-11ef-a0af-c98d8ee52a94", 00:23:02.102 "is_configured": true, 00:23:02.102 "data_offset": 2048, 00:23:02.102 "data_size": 63488 00:23:02.102 }, 00:23:02.102 { 00:23:02.102 "name": "BaseBdev2", 00:23:02.102 "uuid": "86c6e6c7-428f-11ef-a0af-c98d8ee52a94", 00:23:02.102 "is_configured": true, 00:23:02.102 "data_offset": 2048, 00:23:02.102 "data_size": 63488 00:23:02.102 }, 00:23:02.102 { 00:23:02.102 "name": "BaseBdev3", 00:23:02.102 "uuid": "8784bd89-428f-11ef-a0af-c98d8ee52a94", 00:23:02.102 "is_configured": true, 00:23:02.102 "data_offset": 2048, 00:23:02.102 "data_size": 63488 00:23:02.102 } 00:23:02.102 ] 00:23:02.102 }' 00:23:02.102 09:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:02.102 09:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:02.361 09:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:23:02.361 09:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:02.361 09:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:02.361 09:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:02.361 09:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:02.361 09:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:23:02.361 09:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:02.361 09:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:02.622 [2024-07-15 09:49:30.623556] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:02.622 09:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:02.622 "name": "Existed_Raid", 00:23:02.622 "aliases": [ 00:23:02.622 "863f639b-428f-11ef-a0af-c98d8ee52a94" 00:23:02.622 ], 00:23:02.622 "product_name": "Raid Volume", 00:23:02.622 "block_size": 512, 00:23:02.622 "num_blocks": 63488, 00:23:02.622 "uuid": "863f639b-428f-11ef-a0af-c98d8ee52a94", 00:23:02.622 "assigned_rate_limits": { 00:23:02.622 "rw_ios_per_sec": 0, 00:23:02.622 "rw_mbytes_per_sec": 0, 00:23:02.622 "r_mbytes_per_sec": 0, 00:23:02.622 "w_mbytes_per_sec": 0 00:23:02.622 }, 00:23:02.622 "claimed": false, 00:23:02.622 "zoned": false, 00:23:02.622 "supported_io_types": { 00:23:02.622 "read": true, 
00:23:02.622 "write": true, 00:23:02.622 "unmap": false, 00:23:02.622 "flush": false, 00:23:02.622 "reset": true, 00:23:02.622 "nvme_admin": false, 00:23:02.622 "nvme_io": false, 00:23:02.622 "nvme_io_md": false, 00:23:02.622 "write_zeroes": true, 00:23:02.622 "zcopy": false, 00:23:02.622 "get_zone_info": false, 00:23:02.622 "zone_management": false, 00:23:02.622 "zone_append": false, 00:23:02.622 "compare": false, 00:23:02.622 "compare_and_write": false, 00:23:02.622 "abort": false, 00:23:02.622 "seek_hole": false, 00:23:02.622 "seek_data": false, 00:23:02.622 "copy": false, 00:23:02.622 "nvme_iov_md": false 00:23:02.622 }, 00:23:02.622 "memory_domains": [ 00:23:02.622 { 00:23:02.622 "dma_device_id": "system", 00:23:02.622 "dma_device_type": 1 00:23:02.622 }, 00:23:02.622 { 00:23:02.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:02.622 "dma_device_type": 2 00:23:02.622 }, 00:23:02.622 { 00:23:02.622 "dma_device_id": "system", 00:23:02.622 "dma_device_type": 1 00:23:02.622 }, 00:23:02.622 { 00:23:02.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:02.622 "dma_device_type": 2 00:23:02.622 }, 00:23:02.622 { 00:23:02.622 "dma_device_id": "system", 00:23:02.622 "dma_device_type": 1 00:23:02.622 }, 00:23:02.622 { 00:23:02.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:02.622 "dma_device_type": 2 00:23:02.622 } 00:23:02.622 ], 00:23:02.622 "driver_specific": { 00:23:02.622 "raid": { 00:23:02.622 "uuid": "863f639b-428f-11ef-a0af-c98d8ee52a94", 00:23:02.622 "strip_size_kb": 0, 00:23:02.622 "state": "online", 00:23:02.622 "raid_level": "raid1", 00:23:02.622 "superblock": true, 00:23:02.622 "num_base_bdevs": 3, 00:23:02.622 "num_base_bdevs_discovered": 3, 00:23:02.622 "num_base_bdevs_operational": 3, 00:23:02.622 "base_bdevs_list": [ 00:23:02.622 { 00:23:02.622 "name": "BaseBdev1", 00:23:02.622 "uuid": "854bd619-428f-11ef-a0af-c98d8ee52a94", 00:23:02.622 "is_configured": true, 00:23:02.622 "data_offset": 2048, 00:23:02.622 "data_size": 63488 00:23:02.622 }, 00:23:02.622 { 00:23:02.622 "name": "BaseBdev2", 00:23:02.622 "uuid": "86c6e6c7-428f-11ef-a0af-c98d8ee52a94", 00:23:02.622 "is_configured": true, 00:23:02.622 "data_offset": 2048, 00:23:02.622 "data_size": 63488 00:23:02.622 }, 00:23:02.622 { 00:23:02.622 "name": "BaseBdev3", 00:23:02.622 "uuid": "8784bd89-428f-11ef-a0af-c98d8ee52a94", 00:23:02.622 "is_configured": true, 00:23:02.622 "data_offset": 2048, 00:23:02.622 "data_size": 63488 00:23:02.622 } 00:23:02.622 ] 00:23:02.622 } 00:23:02.622 } 00:23:02.622 }' 00:23:02.622 09:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:02.622 09:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:23:02.622 BaseBdev2 00:23:02.622 BaseBdev3' 00:23:02.622 09:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:02.622 09:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:02.622 09:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:23:02.921 09:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:02.921 "name": "BaseBdev1", 00:23:02.921 "aliases": [ 00:23:02.921 "854bd619-428f-11ef-a0af-c98d8ee52a94" 00:23:02.921 ], 00:23:02.921 "product_name": "Malloc disk", 00:23:02.921 
"block_size": 512, 00:23:02.921 "num_blocks": 65536, 00:23:02.921 "uuid": "854bd619-428f-11ef-a0af-c98d8ee52a94", 00:23:02.921 "assigned_rate_limits": { 00:23:02.921 "rw_ios_per_sec": 0, 00:23:02.921 "rw_mbytes_per_sec": 0, 00:23:02.921 "r_mbytes_per_sec": 0, 00:23:02.921 "w_mbytes_per_sec": 0 00:23:02.921 }, 00:23:02.921 "claimed": true, 00:23:02.921 "claim_type": "exclusive_write", 00:23:02.921 "zoned": false, 00:23:02.921 "supported_io_types": { 00:23:02.921 "read": true, 00:23:02.921 "write": true, 00:23:02.921 "unmap": true, 00:23:02.921 "flush": true, 00:23:02.921 "reset": true, 00:23:02.921 "nvme_admin": false, 00:23:02.921 "nvme_io": false, 00:23:02.921 "nvme_io_md": false, 00:23:02.921 "write_zeroes": true, 00:23:02.921 "zcopy": true, 00:23:02.921 "get_zone_info": false, 00:23:02.921 "zone_management": false, 00:23:02.921 "zone_append": false, 00:23:02.921 "compare": false, 00:23:02.921 "compare_and_write": false, 00:23:02.921 "abort": true, 00:23:02.921 "seek_hole": false, 00:23:02.921 "seek_data": false, 00:23:02.921 "copy": true, 00:23:02.921 "nvme_iov_md": false 00:23:02.921 }, 00:23:02.921 "memory_domains": [ 00:23:02.921 { 00:23:02.921 "dma_device_id": "system", 00:23:02.921 "dma_device_type": 1 00:23:02.921 }, 00:23:02.921 { 00:23:02.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:02.921 "dma_device_type": 2 00:23:02.921 } 00:23:02.921 ], 00:23:02.921 "driver_specific": {} 00:23:02.921 }' 00:23:02.921 09:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:02.921 09:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:02.921 09:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:02.921 09:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:02.921 09:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:02.921 09:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:02.921 09:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:02.921 09:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:02.921 09:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:02.921 09:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:02.921 09:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:02.921 09:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:02.921 09:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:02.921 09:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:02.921 09:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:03.180 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:03.180 "name": "BaseBdev2", 00:23:03.180 "aliases": [ 00:23:03.180 "86c6e6c7-428f-11ef-a0af-c98d8ee52a94" 00:23:03.180 ], 00:23:03.180 "product_name": "Malloc disk", 00:23:03.180 "block_size": 512, 00:23:03.180 "num_blocks": 65536, 00:23:03.180 "uuid": "86c6e6c7-428f-11ef-a0af-c98d8ee52a94", 00:23:03.180 "assigned_rate_limits": { 
00:23:03.180 "rw_ios_per_sec": 0, 00:23:03.180 "rw_mbytes_per_sec": 0, 00:23:03.180 "r_mbytes_per_sec": 0, 00:23:03.180 "w_mbytes_per_sec": 0 00:23:03.180 }, 00:23:03.180 "claimed": true, 00:23:03.180 "claim_type": "exclusive_write", 00:23:03.180 "zoned": false, 00:23:03.180 "supported_io_types": { 00:23:03.180 "read": true, 00:23:03.180 "write": true, 00:23:03.180 "unmap": true, 00:23:03.180 "flush": true, 00:23:03.180 "reset": true, 00:23:03.180 "nvme_admin": false, 00:23:03.180 "nvme_io": false, 00:23:03.180 "nvme_io_md": false, 00:23:03.180 "write_zeroes": true, 00:23:03.180 "zcopy": true, 00:23:03.180 "get_zone_info": false, 00:23:03.180 "zone_management": false, 00:23:03.180 "zone_append": false, 00:23:03.180 "compare": false, 00:23:03.180 "compare_and_write": false, 00:23:03.180 "abort": true, 00:23:03.180 "seek_hole": false, 00:23:03.180 "seek_data": false, 00:23:03.180 "copy": true, 00:23:03.180 "nvme_iov_md": false 00:23:03.180 }, 00:23:03.180 "memory_domains": [ 00:23:03.180 { 00:23:03.180 "dma_device_id": "system", 00:23:03.180 "dma_device_type": 1 00:23:03.180 }, 00:23:03.180 { 00:23:03.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:03.180 "dma_device_type": 2 00:23:03.180 } 00:23:03.180 ], 00:23:03.180 "driver_specific": {} 00:23:03.180 }' 00:23:03.180 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:03.180 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:03.180 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:03.180 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:03.180 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:03.180 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:03.180 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:03.180 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:03.460 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:03.460 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:03.460 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:03.460 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:03.460 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:03.460 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:03.460 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:03.460 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:03.460 "name": "BaseBdev3", 00:23:03.460 "aliases": [ 00:23:03.460 "8784bd89-428f-11ef-a0af-c98d8ee52a94" 00:23:03.460 ], 00:23:03.460 "product_name": "Malloc disk", 00:23:03.460 "block_size": 512, 00:23:03.460 "num_blocks": 65536, 00:23:03.460 "uuid": "8784bd89-428f-11ef-a0af-c98d8ee52a94", 00:23:03.460 "assigned_rate_limits": { 00:23:03.460 "rw_ios_per_sec": 0, 00:23:03.460 "rw_mbytes_per_sec": 0, 00:23:03.461 "r_mbytes_per_sec": 0, 00:23:03.461 "w_mbytes_per_sec": 0 
00:23:03.461 }, 00:23:03.461 "claimed": true, 00:23:03.461 "claim_type": "exclusive_write", 00:23:03.461 "zoned": false, 00:23:03.461 "supported_io_types": { 00:23:03.461 "read": true, 00:23:03.461 "write": true, 00:23:03.461 "unmap": true, 00:23:03.461 "flush": true, 00:23:03.461 "reset": true, 00:23:03.461 "nvme_admin": false, 00:23:03.461 "nvme_io": false, 00:23:03.461 "nvme_io_md": false, 00:23:03.461 "write_zeroes": true, 00:23:03.461 "zcopy": true, 00:23:03.461 "get_zone_info": false, 00:23:03.461 "zone_management": false, 00:23:03.461 "zone_append": false, 00:23:03.461 "compare": false, 00:23:03.461 "compare_and_write": false, 00:23:03.461 "abort": true, 00:23:03.461 "seek_hole": false, 00:23:03.461 "seek_data": false, 00:23:03.461 "copy": true, 00:23:03.461 "nvme_iov_md": false 00:23:03.461 }, 00:23:03.461 "memory_domains": [ 00:23:03.461 { 00:23:03.461 "dma_device_id": "system", 00:23:03.461 "dma_device_type": 1 00:23:03.461 }, 00:23:03.461 { 00:23:03.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:03.461 "dma_device_type": 2 00:23:03.461 } 00:23:03.461 ], 00:23:03.461 "driver_specific": {} 00:23:03.461 }' 00:23:03.461 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:03.461 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:03.461 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:03.461 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:03.461 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:03.719 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:03.719 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:03.719 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:03.719 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:03.719 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:03.719 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:03.719 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:03.719 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:03.719 [2024-07-15 09:49:31.811587] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:03.979 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:23:03.979 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:23:03.979 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:03.979 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:23:03.979 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:23:03.979 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:23:03.979 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:03.979 09:49:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:03.979 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:03.979 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:03.979 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:03.979 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:03.979 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:03.979 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:03.979 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:03.979 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:03.979 09:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:03.979 09:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:03.979 "name": "Existed_Raid", 00:23:03.979 "uuid": "863f639b-428f-11ef-a0af-c98d8ee52a94", 00:23:03.979 "strip_size_kb": 0, 00:23:03.979 "state": "online", 00:23:03.979 "raid_level": "raid1", 00:23:03.979 "superblock": true, 00:23:03.979 "num_base_bdevs": 3, 00:23:03.979 "num_base_bdevs_discovered": 2, 00:23:03.979 "num_base_bdevs_operational": 2, 00:23:03.979 "base_bdevs_list": [ 00:23:03.979 { 00:23:03.979 "name": null, 00:23:03.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.979 "is_configured": false, 00:23:03.979 "data_offset": 2048, 00:23:03.979 "data_size": 63488 00:23:03.979 }, 00:23:03.979 { 00:23:03.979 "name": "BaseBdev2", 00:23:03.979 "uuid": "86c6e6c7-428f-11ef-a0af-c98d8ee52a94", 00:23:03.979 "is_configured": true, 00:23:03.979 "data_offset": 2048, 00:23:03.979 "data_size": 63488 00:23:03.979 }, 00:23:03.979 { 00:23:03.979 "name": "BaseBdev3", 00:23:03.979 "uuid": "8784bd89-428f-11ef-a0af-c98d8ee52a94", 00:23:03.979 "is_configured": true, 00:23:03.979 "data_offset": 2048, 00:23:03.979 "data_size": 63488 00:23:03.979 } 00:23:03.979 ] 00:23:03.979 }' 00:23:03.979 09:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:03.979 09:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.550 09:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:23:04.550 09:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:04.550 09:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:04.550 09:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:04.809 09:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:04.809 09:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:04.809 09:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev2 00:23:04.809 [2024-07-15 09:49:32.888661] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:05.133 09:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:05.134 09:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:05.134 09:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:05.134 09:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.134 09:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:05.134 09:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:05.134 09:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:05.392 [2024-07-15 09:49:33.413614] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:05.393 [2024-07-15 09:49:33.413654] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:05.393 [2024-07-15 09:49:33.422423] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:05.393 [2024-07-15 09:49:33.422443] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:05.393 [2024-07-15 09:49:33.422447] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12de80c34a00 name Existed_Raid, state offline 00:23:05.393 09:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:05.393 09:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:05.393 09:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.393 09:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:23:05.654 09:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:23:05.654 09:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:23:05.654 09:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:23:05.654 09:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:23:05.654 09:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:05.654 09:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:05.913 BaseBdev2 00:23:05.913 09:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:23:05.913 09:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:23:05.913 09:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:05.913 09:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:23:05.913 09:49:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:05.913 09:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:05.913 09:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:06.173 09:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:06.432 [ 00:23:06.432 { 00:23:06.432 "name": "BaseBdev2", 00:23:06.432 "aliases": [ 00:23:06.432 "8a3b287e-428f-11ef-a0af-c98d8ee52a94" 00:23:06.432 ], 00:23:06.432 "product_name": "Malloc disk", 00:23:06.432 "block_size": 512, 00:23:06.432 "num_blocks": 65536, 00:23:06.432 "uuid": "8a3b287e-428f-11ef-a0af-c98d8ee52a94", 00:23:06.432 "assigned_rate_limits": { 00:23:06.432 "rw_ios_per_sec": 0, 00:23:06.432 "rw_mbytes_per_sec": 0, 00:23:06.432 "r_mbytes_per_sec": 0, 00:23:06.432 "w_mbytes_per_sec": 0 00:23:06.432 }, 00:23:06.432 "claimed": false, 00:23:06.432 "zoned": false, 00:23:06.432 "supported_io_types": { 00:23:06.432 "read": true, 00:23:06.432 "write": true, 00:23:06.432 "unmap": true, 00:23:06.432 "flush": true, 00:23:06.432 "reset": true, 00:23:06.432 "nvme_admin": false, 00:23:06.432 "nvme_io": false, 00:23:06.432 "nvme_io_md": false, 00:23:06.432 "write_zeroes": true, 00:23:06.432 "zcopy": true, 00:23:06.432 "get_zone_info": false, 00:23:06.433 "zone_management": false, 00:23:06.433 "zone_append": false, 00:23:06.433 "compare": false, 00:23:06.433 "compare_and_write": false, 00:23:06.433 "abort": true, 00:23:06.433 "seek_hole": false, 00:23:06.433 "seek_data": false, 00:23:06.433 "copy": true, 00:23:06.433 "nvme_iov_md": false 00:23:06.433 }, 00:23:06.433 "memory_domains": [ 00:23:06.433 { 00:23:06.433 "dma_device_id": "system", 00:23:06.433 "dma_device_type": 1 00:23:06.433 }, 00:23:06.433 { 00:23:06.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:06.433 "dma_device_type": 2 00:23:06.433 } 00:23:06.433 ], 00:23:06.433 "driver_specific": {} 00:23:06.433 } 00:23:06.433 ] 00:23:06.433 09:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:23:06.433 09:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:06.433 09:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:06.433 09:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:06.433 BaseBdev3 00:23:06.433 09:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:23:06.433 09:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:23:06.433 09:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:06.433 09:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:23:06.433 09:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:06.433 09:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:06.433 09:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:07.000 09:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:07.000 [ 00:23:07.000 { 00:23:07.000 "name": "BaseBdev3", 00:23:07.000 "aliases": [ 00:23:07.000 "8a9d6e28-428f-11ef-a0af-c98d8ee52a94" 00:23:07.000 ], 00:23:07.000 "product_name": "Malloc disk", 00:23:07.000 "block_size": 512, 00:23:07.000 "num_blocks": 65536, 00:23:07.000 "uuid": "8a9d6e28-428f-11ef-a0af-c98d8ee52a94", 00:23:07.000 "assigned_rate_limits": { 00:23:07.000 "rw_ios_per_sec": 0, 00:23:07.000 "rw_mbytes_per_sec": 0, 00:23:07.000 "r_mbytes_per_sec": 0, 00:23:07.000 "w_mbytes_per_sec": 0 00:23:07.000 }, 00:23:07.000 "claimed": false, 00:23:07.000 "zoned": false, 00:23:07.000 "supported_io_types": { 00:23:07.000 "read": true, 00:23:07.000 "write": true, 00:23:07.000 "unmap": true, 00:23:07.000 "flush": true, 00:23:07.000 "reset": true, 00:23:07.000 "nvme_admin": false, 00:23:07.000 "nvme_io": false, 00:23:07.000 "nvme_io_md": false, 00:23:07.000 "write_zeroes": true, 00:23:07.000 "zcopy": true, 00:23:07.000 "get_zone_info": false, 00:23:07.000 "zone_management": false, 00:23:07.000 "zone_append": false, 00:23:07.000 "compare": false, 00:23:07.000 "compare_and_write": false, 00:23:07.000 "abort": true, 00:23:07.000 "seek_hole": false, 00:23:07.000 "seek_data": false, 00:23:07.000 "copy": true, 00:23:07.000 "nvme_iov_md": false 00:23:07.000 }, 00:23:07.000 "memory_domains": [ 00:23:07.000 { 00:23:07.000 "dma_device_id": "system", 00:23:07.000 "dma_device_type": 1 00:23:07.000 }, 00:23:07.000 { 00:23:07.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:07.000 "dma_device_type": 2 00:23:07.000 } 00:23:07.000 ], 00:23:07.000 "driver_specific": {} 00:23:07.000 } 00:23:07.000 ] 00:23:07.000 09:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:23:07.000 09:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:07.000 09:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:07.000 09:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:07.258 [2024-07-15 09:49:35.294482] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:07.258 [2024-07-15 09:49:35.294547] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:07.258 [2024-07-15 09:49:35.294555] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:07.258 [2024-07-15 09:49:35.295201] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:07.258 09:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:07.258 09:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:07.258 09:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:07.258 09:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:07.258 09:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local 
strip_size=0 00:23:07.258 09:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:07.258 09:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:07.258 09:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:07.258 09:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:07.258 09:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:07.258 09:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.258 09:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:07.515 09:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:07.515 "name": "Existed_Raid", 00:23:07.515 "uuid": "8b15106a-428f-11ef-a0af-c98d8ee52a94", 00:23:07.515 "strip_size_kb": 0, 00:23:07.515 "state": "configuring", 00:23:07.515 "raid_level": "raid1", 00:23:07.515 "superblock": true, 00:23:07.515 "num_base_bdevs": 3, 00:23:07.515 "num_base_bdevs_discovered": 2, 00:23:07.515 "num_base_bdevs_operational": 3, 00:23:07.515 "base_bdevs_list": [ 00:23:07.515 { 00:23:07.515 "name": "BaseBdev1", 00:23:07.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.515 "is_configured": false, 00:23:07.515 "data_offset": 0, 00:23:07.515 "data_size": 0 00:23:07.515 }, 00:23:07.515 { 00:23:07.515 "name": "BaseBdev2", 00:23:07.515 "uuid": "8a3b287e-428f-11ef-a0af-c98d8ee52a94", 00:23:07.515 "is_configured": true, 00:23:07.515 "data_offset": 2048, 00:23:07.515 "data_size": 63488 00:23:07.515 }, 00:23:07.515 { 00:23:07.515 "name": "BaseBdev3", 00:23:07.515 "uuid": "8a9d6e28-428f-11ef-a0af-c98d8ee52a94", 00:23:07.515 "is_configured": true, 00:23:07.515 "data_offset": 2048, 00:23:07.515 "data_size": 63488 00:23:07.515 } 00:23:07.515 ] 00:23:07.515 }' 00:23:07.515 09:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:07.515 09:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:07.774 09:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:23:08.034 [2024-07-15 09:49:36.010518] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:08.034 09:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:08.034 09:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:08.034 09:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:08.034 09:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:08.034 09:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:08.034 09:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:08.034 09:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:08.034 09:49:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:08.034 09:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:08.034 09:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:08.034 09:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:08.034 09:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:08.294 09:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:08.294 "name": "Existed_Raid", 00:23:08.294 "uuid": "8b15106a-428f-11ef-a0af-c98d8ee52a94", 00:23:08.294 "strip_size_kb": 0, 00:23:08.294 "state": "configuring", 00:23:08.294 "raid_level": "raid1", 00:23:08.294 "superblock": true, 00:23:08.294 "num_base_bdevs": 3, 00:23:08.294 "num_base_bdevs_discovered": 1, 00:23:08.294 "num_base_bdevs_operational": 3, 00:23:08.294 "base_bdevs_list": [ 00:23:08.294 { 00:23:08.294 "name": "BaseBdev1", 00:23:08.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:08.294 "is_configured": false, 00:23:08.294 "data_offset": 0, 00:23:08.294 "data_size": 0 00:23:08.294 }, 00:23:08.294 { 00:23:08.294 "name": null, 00:23:08.294 "uuid": "8a3b287e-428f-11ef-a0af-c98d8ee52a94", 00:23:08.294 "is_configured": false, 00:23:08.294 "data_offset": 2048, 00:23:08.294 "data_size": 63488 00:23:08.294 }, 00:23:08.294 { 00:23:08.294 "name": "BaseBdev3", 00:23:08.294 "uuid": "8a9d6e28-428f-11ef-a0af-c98d8ee52a94", 00:23:08.294 "is_configured": true, 00:23:08.294 "data_offset": 2048, 00:23:08.294 "data_size": 63488 00:23:08.294 } 00:23:08.294 ] 00:23:08.294 }' 00:23:08.294 09:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:08.294 09:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.555 09:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:08.555 09:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:08.814 09:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:23:08.814 09:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:09.074 [2024-07-15 09:49:36.990699] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:09.074 BaseBdev1 00:23:09.074 09:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:23:09.074 09:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:23:09.074 09:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:09.074 09:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:23:09.074 09:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:09.074 09:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:09.074 09:49:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:09.332 09:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:09.332 [ 00:23:09.332 { 00:23:09.332 "name": "BaseBdev1", 00:23:09.332 "aliases": [ 00:23:09.332 "8c17de8d-428f-11ef-a0af-c98d8ee52a94" 00:23:09.332 ], 00:23:09.332 "product_name": "Malloc disk", 00:23:09.332 "block_size": 512, 00:23:09.332 "num_blocks": 65536, 00:23:09.332 "uuid": "8c17de8d-428f-11ef-a0af-c98d8ee52a94", 00:23:09.332 "assigned_rate_limits": { 00:23:09.332 "rw_ios_per_sec": 0, 00:23:09.332 "rw_mbytes_per_sec": 0, 00:23:09.332 "r_mbytes_per_sec": 0, 00:23:09.332 "w_mbytes_per_sec": 0 00:23:09.332 }, 00:23:09.332 "claimed": true, 00:23:09.332 "claim_type": "exclusive_write", 00:23:09.332 "zoned": false, 00:23:09.332 "supported_io_types": { 00:23:09.332 "read": true, 00:23:09.332 "write": true, 00:23:09.332 "unmap": true, 00:23:09.332 "flush": true, 00:23:09.332 "reset": true, 00:23:09.332 "nvme_admin": false, 00:23:09.332 "nvme_io": false, 00:23:09.332 "nvme_io_md": false, 00:23:09.332 "write_zeroes": true, 00:23:09.332 "zcopy": true, 00:23:09.332 "get_zone_info": false, 00:23:09.332 "zone_management": false, 00:23:09.332 "zone_append": false, 00:23:09.332 "compare": false, 00:23:09.332 "compare_and_write": false, 00:23:09.332 "abort": true, 00:23:09.332 "seek_hole": false, 00:23:09.332 "seek_data": false, 00:23:09.332 "copy": true, 00:23:09.332 "nvme_iov_md": false 00:23:09.332 }, 00:23:09.332 "memory_domains": [ 00:23:09.332 { 00:23:09.332 "dma_device_id": "system", 00:23:09.332 "dma_device_type": 1 00:23:09.332 }, 00:23:09.332 { 00:23:09.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:09.332 "dma_device_type": 2 00:23:09.332 } 00:23:09.332 ], 00:23:09.332 "driver_specific": {} 00:23:09.332 } 00:23:09.332 ] 00:23:09.589 09:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:23:09.589 09:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:09.589 09:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:09.589 09:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:09.589 09:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:09.589 09:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:09.589 09:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:09.589 09:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:09.589 09:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:09.589 09:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:09.589 09:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:09.589 09:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.589 09:49:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:09.589 09:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:09.589 "name": "Existed_Raid", 00:23:09.589 "uuid": "8b15106a-428f-11ef-a0af-c98d8ee52a94", 00:23:09.589 "strip_size_kb": 0, 00:23:09.589 "state": "configuring", 00:23:09.589 "raid_level": "raid1", 00:23:09.589 "superblock": true, 00:23:09.589 "num_base_bdevs": 3, 00:23:09.589 "num_base_bdevs_discovered": 2, 00:23:09.589 "num_base_bdevs_operational": 3, 00:23:09.589 "base_bdevs_list": [ 00:23:09.589 { 00:23:09.589 "name": "BaseBdev1", 00:23:09.589 "uuid": "8c17de8d-428f-11ef-a0af-c98d8ee52a94", 00:23:09.589 "is_configured": true, 00:23:09.589 "data_offset": 2048, 00:23:09.589 "data_size": 63488 00:23:09.589 }, 00:23:09.589 { 00:23:09.589 "name": null, 00:23:09.589 "uuid": "8a3b287e-428f-11ef-a0af-c98d8ee52a94", 00:23:09.589 "is_configured": false, 00:23:09.589 "data_offset": 2048, 00:23:09.589 "data_size": 63488 00:23:09.589 }, 00:23:09.589 { 00:23:09.589 "name": "BaseBdev3", 00:23:09.589 "uuid": "8a9d6e28-428f-11ef-a0af-c98d8ee52a94", 00:23:09.589 "is_configured": true, 00:23:09.589 "data_offset": 2048, 00:23:09.589 "data_size": 63488 00:23:09.589 } 00:23:09.589 ] 00:23:09.589 }' 00:23:09.589 09:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:09.589 09:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:10.154 09:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:10.154 09:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:10.154 09:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:23:10.154 09:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:23:10.412 [2024-07-15 09:49:38.438652] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:10.412 09:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:10.412 09:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:10.412 09:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:10.412 09:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:10.412 09:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:10.412 09:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:10.412 09:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:10.412 09:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:10.412 09:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:10.412 09:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:10.412 09:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:10.412 09:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:10.670 09:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:10.671 "name": "Existed_Raid", 00:23:10.671 "uuid": "8b15106a-428f-11ef-a0af-c98d8ee52a94", 00:23:10.671 "strip_size_kb": 0, 00:23:10.671 "state": "configuring", 00:23:10.671 "raid_level": "raid1", 00:23:10.671 "superblock": true, 00:23:10.671 "num_base_bdevs": 3, 00:23:10.671 "num_base_bdevs_discovered": 1, 00:23:10.671 "num_base_bdevs_operational": 3, 00:23:10.671 "base_bdevs_list": [ 00:23:10.671 { 00:23:10.671 "name": "BaseBdev1", 00:23:10.671 "uuid": "8c17de8d-428f-11ef-a0af-c98d8ee52a94", 00:23:10.671 "is_configured": true, 00:23:10.671 "data_offset": 2048, 00:23:10.671 "data_size": 63488 00:23:10.671 }, 00:23:10.671 { 00:23:10.671 "name": null, 00:23:10.671 "uuid": "8a3b287e-428f-11ef-a0af-c98d8ee52a94", 00:23:10.671 "is_configured": false, 00:23:10.671 "data_offset": 2048, 00:23:10.671 "data_size": 63488 00:23:10.671 }, 00:23:10.671 { 00:23:10.671 "name": null, 00:23:10.671 "uuid": "8a9d6e28-428f-11ef-a0af-c98d8ee52a94", 00:23:10.671 "is_configured": false, 00:23:10.671 "data_offset": 2048, 00:23:10.671 "data_size": 63488 00:23:10.671 } 00:23:10.671 ] 00:23:10.671 }' 00:23:10.671 09:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:10.671 09:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:10.928 09:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:10.928 09:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:11.186 09:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:23:11.186 09:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:23:11.445 [2024-07-15 09:49:39.406719] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:11.445 09:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:11.445 09:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:11.445 09:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:11.445 09:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:11.445 09:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:11.445 09:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:11.445 09:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:11.445 09:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:11.445 09:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:11.445 09:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
00:23:11.445 09:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.445 09:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:11.707 09:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:11.707 "name": "Existed_Raid", 00:23:11.707 "uuid": "8b15106a-428f-11ef-a0af-c98d8ee52a94", 00:23:11.707 "strip_size_kb": 0, 00:23:11.707 "state": "configuring", 00:23:11.707 "raid_level": "raid1", 00:23:11.707 "superblock": true, 00:23:11.707 "num_base_bdevs": 3, 00:23:11.707 "num_base_bdevs_discovered": 2, 00:23:11.707 "num_base_bdevs_operational": 3, 00:23:11.707 "base_bdevs_list": [ 00:23:11.707 { 00:23:11.707 "name": "BaseBdev1", 00:23:11.707 "uuid": "8c17de8d-428f-11ef-a0af-c98d8ee52a94", 00:23:11.707 "is_configured": true, 00:23:11.707 "data_offset": 2048, 00:23:11.707 "data_size": 63488 00:23:11.707 }, 00:23:11.707 { 00:23:11.707 "name": null, 00:23:11.707 "uuid": "8a3b287e-428f-11ef-a0af-c98d8ee52a94", 00:23:11.707 "is_configured": false, 00:23:11.707 "data_offset": 2048, 00:23:11.707 "data_size": 63488 00:23:11.707 }, 00:23:11.707 { 00:23:11.707 "name": "BaseBdev3", 00:23:11.707 "uuid": "8a9d6e28-428f-11ef-a0af-c98d8ee52a94", 00:23:11.707 "is_configured": true, 00:23:11.707 "data_offset": 2048, 00:23:11.707 "data_size": 63488 00:23:11.707 } 00:23:11.707 ] 00:23:11.707 }' 00:23:11.707 09:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:11.707 09:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:11.976 09:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.976 09:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:12.233 09:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:23:12.233 09:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:12.492 [2024-07-15 09:49:40.470792] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:12.492 09:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:12.492 09:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:12.492 09:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:12.492 09:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:12.492 09:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:12.492 09:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:12.492 09:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:12.492 09:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:12.492 09:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:23:12.492 09:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:12.492 09:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.492 09:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:12.750 09:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:12.751 "name": "Existed_Raid", 00:23:12.751 "uuid": "8b15106a-428f-11ef-a0af-c98d8ee52a94", 00:23:12.751 "strip_size_kb": 0, 00:23:12.751 "state": "configuring", 00:23:12.751 "raid_level": "raid1", 00:23:12.751 "superblock": true, 00:23:12.751 "num_base_bdevs": 3, 00:23:12.751 "num_base_bdevs_discovered": 1, 00:23:12.751 "num_base_bdevs_operational": 3, 00:23:12.751 "base_bdevs_list": [ 00:23:12.751 { 00:23:12.751 "name": null, 00:23:12.751 "uuid": "8c17de8d-428f-11ef-a0af-c98d8ee52a94", 00:23:12.751 "is_configured": false, 00:23:12.751 "data_offset": 2048, 00:23:12.751 "data_size": 63488 00:23:12.751 }, 00:23:12.751 { 00:23:12.751 "name": null, 00:23:12.751 "uuid": "8a3b287e-428f-11ef-a0af-c98d8ee52a94", 00:23:12.751 "is_configured": false, 00:23:12.751 "data_offset": 2048, 00:23:12.751 "data_size": 63488 00:23:12.751 }, 00:23:12.751 { 00:23:12.751 "name": "BaseBdev3", 00:23:12.751 "uuid": "8a9d6e28-428f-11ef-a0af-c98d8ee52a94", 00:23:12.751 "is_configured": true, 00:23:12.751 "data_offset": 2048, 00:23:12.751 "data_size": 63488 00:23:12.751 } 00:23:12.751 ] 00:23:12.751 }' 00:23:12.751 09:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:12.751 09:49:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:13.318 09:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.318 09:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:13.318 09:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:23:13.318 09:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:23:13.577 [2024-07-15 09:49:41.575457] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:13.577 09:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:13.577 09:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:13.577 09:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:13.577 09:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:13.577 09:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:13.577 09:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:13.577 09:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:13.577 09:49:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:13.577 09:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:13.577 09:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:13.577 09:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:13.577 09:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.836 09:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:13.836 "name": "Existed_Raid", 00:23:13.836 "uuid": "8b15106a-428f-11ef-a0af-c98d8ee52a94", 00:23:13.836 "strip_size_kb": 0, 00:23:13.836 "state": "configuring", 00:23:13.836 "raid_level": "raid1", 00:23:13.836 "superblock": true, 00:23:13.836 "num_base_bdevs": 3, 00:23:13.836 "num_base_bdevs_discovered": 2, 00:23:13.836 "num_base_bdevs_operational": 3, 00:23:13.836 "base_bdevs_list": [ 00:23:13.836 { 00:23:13.836 "name": null, 00:23:13.836 "uuid": "8c17de8d-428f-11ef-a0af-c98d8ee52a94", 00:23:13.836 "is_configured": false, 00:23:13.836 "data_offset": 2048, 00:23:13.836 "data_size": 63488 00:23:13.836 }, 00:23:13.836 { 00:23:13.836 "name": "BaseBdev2", 00:23:13.836 "uuid": "8a3b287e-428f-11ef-a0af-c98d8ee52a94", 00:23:13.836 "is_configured": true, 00:23:13.836 "data_offset": 2048, 00:23:13.836 "data_size": 63488 00:23:13.836 }, 00:23:13.836 { 00:23:13.836 "name": "BaseBdev3", 00:23:13.836 "uuid": "8a9d6e28-428f-11ef-a0af-c98d8ee52a94", 00:23:13.836 "is_configured": true, 00:23:13.836 "data_offset": 2048, 00:23:13.836 "data_size": 63488 00:23:13.836 } 00:23:13.836 ] 00:23:13.836 }' 00:23:13.836 09:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:13.836 09:49:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:14.401 09:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:14.401 09:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:14.401 09:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:23:14.401 09:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:14.401 09:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:23:14.658 09:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 8c17de8d-428f-11ef-a0af-c98d8ee52a94 00:23:14.917 [2024-07-15 09:49:42.859637] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:23:14.917 [2024-07-15 09:49:42.859687] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x12de80c34f00 00:23:14.917 [2024-07-15 09:49:42.859691] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:14.917 [2024-07-15 09:49:42.859707] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12de80c97e20 00:23:14.917 [2024-07-15 09:49:42.859745] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x12de80c34f00 00:23:14.917 [2024-07-15 09:49:42.859748] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x12de80c34f00 00:23:14.917 [2024-07-15 09:49:42.859764] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:14.917 NewBaseBdev 00:23:14.917 09:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:23:14.917 09:49:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:23:14.917 09:49:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:14.917 09:49:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:23:14.917 09:49:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:14.917 09:49:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:14.917 09:49:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:15.194 09:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:23:15.451 [ 00:23:15.451 { 00:23:15.451 "name": "NewBaseBdev", 00:23:15.451 "aliases": [ 00:23:15.451 "8c17de8d-428f-11ef-a0af-c98d8ee52a94" 00:23:15.451 ], 00:23:15.451 "product_name": "Malloc disk", 00:23:15.451 "block_size": 512, 00:23:15.451 "num_blocks": 65536, 00:23:15.451 "uuid": "8c17de8d-428f-11ef-a0af-c98d8ee52a94", 00:23:15.451 "assigned_rate_limits": { 00:23:15.451 "rw_ios_per_sec": 0, 00:23:15.451 "rw_mbytes_per_sec": 0, 00:23:15.451 "r_mbytes_per_sec": 0, 00:23:15.451 "w_mbytes_per_sec": 0 00:23:15.451 }, 00:23:15.451 "claimed": true, 00:23:15.451 "claim_type": "exclusive_write", 00:23:15.451 "zoned": false, 00:23:15.451 "supported_io_types": { 00:23:15.451 "read": true, 00:23:15.451 "write": true, 00:23:15.451 "unmap": true, 00:23:15.451 "flush": true, 00:23:15.451 "reset": true, 00:23:15.451 "nvme_admin": false, 00:23:15.451 "nvme_io": false, 00:23:15.451 "nvme_io_md": false, 00:23:15.451 "write_zeroes": true, 00:23:15.451 "zcopy": true, 00:23:15.451 "get_zone_info": false, 00:23:15.451 "zone_management": false, 00:23:15.451 "zone_append": false, 00:23:15.451 "compare": false, 00:23:15.451 "compare_and_write": false, 00:23:15.451 "abort": true, 00:23:15.451 "seek_hole": false, 00:23:15.451 "seek_data": false, 00:23:15.451 "copy": true, 00:23:15.451 "nvme_iov_md": false 00:23:15.451 }, 00:23:15.451 "memory_domains": [ 00:23:15.451 { 00:23:15.451 "dma_device_id": "system", 00:23:15.451 "dma_device_type": 1 00:23:15.451 }, 00:23:15.451 { 00:23:15.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:15.451 "dma_device_type": 2 00:23:15.451 } 00:23:15.451 ], 00:23:15.451 "driver_specific": {} 00:23:15.451 } 00:23:15.451 ] 00:23:15.451 09:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:23:15.451 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:23:15.451 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:15.451 09:49:43 
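NewBaseBdev was just recreated with the UUID read back from the deleted member's slot 0, which is why the raid module claims it and brings Existed_Raid online, per the *DEBUG* lines above. The @897..@904 trace is the waitforbdev helper: wait for examine callbacks to finish, then poll for the bdev with a 2000 ms timeout; its JSON reply follows below. A sketch of the same sequence, under the same rpc/sock assumptions as the earlier sketch:

# Recreate the removed member under the original UUID (arguments as traced:
# 32 MiB malloc, 512 B blocks), then wait until the bdev is visible.
"$rpc" -s "$sock" bdev_malloc_create 32 512 -b NewBaseBdev \
    -u 8c17de8d-428f-11ef-a0af-c98d8ee52a94
"$rpc" -s "$sock" bdev_wait_for_examine
"$rpc" -s "$sock" bdev_get_bdevs -b NewBaseBdev -t 2000 >/dev/null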
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:15.451 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:15.451 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:15.451 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:15.451 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:15.451 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:15.451 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:15.451 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:15.451 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:15.451 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:15.709 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:15.709 "name": "Existed_Raid", 00:23:15.709 "uuid": "8b15106a-428f-11ef-a0af-c98d8ee52a94", 00:23:15.709 "strip_size_kb": 0, 00:23:15.710 "state": "online", 00:23:15.710 "raid_level": "raid1", 00:23:15.710 "superblock": true, 00:23:15.710 "num_base_bdevs": 3, 00:23:15.710 "num_base_bdevs_discovered": 3, 00:23:15.710 "num_base_bdevs_operational": 3, 00:23:15.710 "base_bdevs_list": [ 00:23:15.710 { 00:23:15.710 "name": "NewBaseBdev", 00:23:15.710 "uuid": "8c17de8d-428f-11ef-a0af-c98d8ee52a94", 00:23:15.710 "is_configured": true, 00:23:15.710 "data_offset": 2048, 00:23:15.710 "data_size": 63488 00:23:15.710 }, 00:23:15.710 { 00:23:15.710 "name": "BaseBdev2", 00:23:15.710 "uuid": "8a3b287e-428f-11ef-a0af-c98d8ee52a94", 00:23:15.710 "is_configured": true, 00:23:15.710 "data_offset": 2048, 00:23:15.710 "data_size": 63488 00:23:15.710 }, 00:23:15.710 { 00:23:15.710 "name": "BaseBdev3", 00:23:15.710 "uuid": "8a9d6e28-428f-11ef-a0af-c98d8ee52a94", 00:23:15.710 "is_configured": true, 00:23:15.710 "data_offset": 2048, 00:23:15.710 "data_size": 63488 00:23:15.710 } 00:23:15.710 ] 00:23:15.710 }' 00:23:15.710 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:15.710 09:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:15.967 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:23:15.967 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:15.967 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:15.967 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:15.967 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:15.967 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:23:15.967 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:15.967 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:15.967 [2024-07-15 09:49:44.055614] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:16.225 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:16.225 "name": "Existed_Raid", 00:23:16.225 "aliases": [ 00:23:16.225 "8b15106a-428f-11ef-a0af-c98d8ee52a94" 00:23:16.225 ], 00:23:16.225 "product_name": "Raid Volume", 00:23:16.226 "block_size": 512, 00:23:16.226 "num_blocks": 63488, 00:23:16.226 "uuid": "8b15106a-428f-11ef-a0af-c98d8ee52a94", 00:23:16.226 "assigned_rate_limits": { 00:23:16.226 "rw_ios_per_sec": 0, 00:23:16.226 "rw_mbytes_per_sec": 0, 00:23:16.226 "r_mbytes_per_sec": 0, 00:23:16.226 "w_mbytes_per_sec": 0 00:23:16.226 }, 00:23:16.226 "claimed": false, 00:23:16.226 "zoned": false, 00:23:16.226 "supported_io_types": { 00:23:16.226 "read": true, 00:23:16.226 "write": true, 00:23:16.226 "unmap": false, 00:23:16.226 "flush": false, 00:23:16.226 "reset": true, 00:23:16.226 "nvme_admin": false, 00:23:16.226 "nvme_io": false, 00:23:16.226 "nvme_io_md": false, 00:23:16.226 "write_zeroes": true, 00:23:16.226 "zcopy": false, 00:23:16.226 "get_zone_info": false, 00:23:16.226 "zone_management": false, 00:23:16.226 "zone_append": false, 00:23:16.226 "compare": false, 00:23:16.226 "compare_and_write": false, 00:23:16.226 "abort": false, 00:23:16.226 "seek_hole": false, 00:23:16.226 "seek_data": false, 00:23:16.226 "copy": false, 00:23:16.226 "nvme_iov_md": false 00:23:16.226 }, 00:23:16.226 "memory_domains": [ 00:23:16.226 { 00:23:16.226 "dma_device_id": "system", 00:23:16.226 "dma_device_type": 1 00:23:16.226 }, 00:23:16.226 { 00:23:16.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:16.226 "dma_device_type": 2 00:23:16.226 }, 00:23:16.226 { 00:23:16.226 "dma_device_id": "system", 00:23:16.226 "dma_device_type": 1 00:23:16.226 }, 00:23:16.226 { 00:23:16.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:16.226 "dma_device_type": 2 00:23:16.226 }, 00:23:16.226 { 00:23:16.226 "dma_device_id": "system", 00:23:16.226 "dma_device_type": 1 00:23:16.226 }, 00:23:16.226 { 00:23:16.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:16.226 "dma_device_type": 2 00:23:16.226 } 00:23:16.226 ], 00:23:16.226 "driver_specific": { 00:23:16.226 "raid": { 00:23:16.226 "uuid": "8b15106a-428f-11ef-a0af-c98d8ee52a94", 00:23:16.226 "strip_size_kb": 0, 00:23:16.226 "state": "online", 00:23:16.226 "raid_level": "raid1", 00:23:16.226 "superblock": true, 00:23:16.226 "num_base_bdevs": 3, 00:23:16.226 "num_base_bdevs_discovered": 3, 00:23:16.226 "num_base_bdevs_operational": 3, 00:23:16.226 "base_bdevs_list": [ 00:23:16.226 { 00:23:16.226 "name": "NewBaseBdev", 00:23:16.226 "uuid": "8c17de8d-428f-11ef-a0af-c98d8ee52a94", 00:23:16.226 "is_configured": true, 00:23:16.226 "data_offset": 2048, 00:23:16.226 "data_size": 63488 00:23:16.226 }, 00:23:16.226 { 00:23:16.226 "name": "BaseBdev2", 00:23:16.226 "uuid": "8a3b287e-428f-11ef-a0af-c98d8ee52a94", 00:23:16.226 "is_configured": true, 00:23:16.226 "data_offset": 2048, 00:23:16.226 "data_size": 63488 00:23:16.226 }, 00:23:16.226 { 00:23:16.226 "name": "BaseBdev3", 00:23:16.226 "uuid": "8a9d6e28-428f-11ef-a0af-c98d8ee52a94", 00:23:16.226 "is_configured": true, 00:23:16.226 "data_offset": 2048, 00:23:16.226 "data_size": 63488 00:23:16.226 } 00:23:16.226 ] 00:23:16.226 } 00:23:16.226 } 00:23:16.226 }' 00:23:16.226 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:16.226 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:23:16.226 BaseBdev2 00:23:16.226 BaseBdev3' 00:23:16.226 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:16.226 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:16.226 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:23:16.226 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:16.226 "name": "NewBaseBdev", 00:23:16.226 "aliases": [ 00:23:16.226 "8c17de8d-428f-11ef-a0af-c98d8ee52a94" 00:23:16.226 ], 00:23:16.226 "product_name": "Malloc disk", 00:23:16.226 "block_size": 512, 00:23:16.226 "num_blocks": 65536, 00:23:16.226 "uuid": "8c17de8d-428f-11ef-a0af-c98d8ee52a94", 00:23:16.226 "assigned_rate_limits": { 00:23:16.226 "rw_ios_per_sec": 0, 00:23:16.226 "rw_mbytes_per_sec": 0, 00:23:16.226 "r_mbytes_per_sec": 0, 00:23:16.226 "w_mbytes_per_sec": 0 00:23:16.226 }, 00:23:16.226 "claimed": true, 00:23:16.226 "claim_type": "exclusive_write", 00:23:16.226 "zoned": false, 00:23:16.226 "supported_io_types": { 00:23:16.226 "read": true, 00:23:16.226 "write": true, 00:23:16.226 "unmap": true, 00:23:16.226 "flush": true, 00:23:16.226 "reset": true, 00:23:16.226 "nvme_admin": false, 00:23:16.226 "nvme_io": false, 00:23:16.226 "nvme_io_md": false, 00:23:16.226 "write_zeroes": true, 00:23:16.226 "zcopy": true, 00:23:16.226 "get_zone_info": false, 00:23:16.226 "zone_management": false, 00:23:16.226 "zone_append": false, 00:23:16.226 "compare": false, 00:23:16.226 "compare_and_write": false, 00:23:16.226 "abort": true, 00:23:16.226 "seek_hole": false, 00:23:16.226 "seek_data": false, 00:23:16.226 "copy": true, 00:23:16.226 "nvme_iov_md": false 00:23:16.226 }, 00:23:16.226 "memory_domains": [ 00:23:16.226 { 00:23:16.226 "dma_device_id": "system", 00:23:16.226 "dma_device_type": 1 00:23:16.226 }, 00:23:16.226 { 00:23:16.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:16.226 "dma_device_type": 2 00:23:16.226 } 00:23:16.226 ], 00:23:16.226 "driver_specific": {} 00:23:16.226 }' 00:23:16.226 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:16.226 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:16.226 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:16.226 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:16.226 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:16.484 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:16.484 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:16.484 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:16.484 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:16.484 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:16.485 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
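From here the @203..@208 loop runs the same four property checks once per configured member, for NewBaseBdev and then again below for BaseBdev2 and BaseBdev3: plain Malloc disks report a 512-byte block size and carry no metadata, so md_size, md_interleave and dif_type all come back null. One iteration of the loop, condensed, with the same rpc/sock assumptions as above:

# One round of the per-member property checks.
base_bdev_info=$("$rpc" -s "$sock" bdev_get_bdevs -b NewBaseBdev | jq '.[]')
[[ $(jq .block_size    <<< "$base_bdev_info") == 512  ]]
[[ $(jq .md_size       <<< "$base_bdev_info") == null ]]
[[ $(jq .md_interleave <<< "$base_bdev_info") == null ]]
[[ $(jq .dif_type      <<< "$base_bdev_info") == null ]]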
00:23:16.485 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:16.485 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:16.485 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:16.485 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:16.743 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:16.743 "name": "BaseBdev2", 00:23:16.743 "aliases": [ 00:23:16.743 "8a3b287e-428f-11ef-a0af-c98d8ee52a94" 00:23:16.743 ], 00:23:16.743 "product_name": "Malloc disk", 00:23:16.743 "block_size": 512, 00:23:16.743 "num_blocks": 65536, 00:23:16.743 "uuid": "8a3b287e-428f-11ef-a0af-c98d8ee52a94", 00:23:16.743 "assigned_rate_limits": { 00:23:16.743 "rw_ios_per_sec": 0, 00:23:16.743 "rw_mbytes_per_sec": 0, 00:23:16.743 "r_mbytes_per_sec": 0, 00:23:16.743 "w_mbytes_per_sec": 0 00:23:16.743 }, 00:23:16.743 "claimed": true, 00:23:16.743 "claim_type": "exclusive_write", 00:23:16.743 "zoned": false, 00:23:16.743 "supported_io_types": { 00:23:16.743 "read": true, 00:23:16.743 "write": true, 00:23:16.743 "unmap": true, 00:23:16.743 "flush": true, 00:23:16.743 "reset": true, 00:23:16.743 "nvme_admin": false, 00:23:16.743 "nvme_io": false, 00:23:16.743 "nvme_io_md": false, 00:23:16.743 "write_zeroes": true, 00:23:16.743 "zcopy": true, 00:23:16.743 "get_zone_info": false, 00:23:16.743 "zone_management": false, 00:23:16.743 "zone_append": false, 00:23:16.743 "compare": false, 00:23:16.743 "compare_and_write": false, 00:23:16.743 "abort": true, 00:23:16.743 "seek_hole": false, 00:23:16.743 "seek_data": false, 00:23:16.743 "copy": true, 00:23:16.743 "nvme_iov_md": false 00:23:16.743 }, 00:23:16.743 "memory_domains": [ 00:23:16.743 { 00:23:16.743 "dma_device_id": "system", 00:23:16.743 "dma_device_type": 1 00:23:16.743 }, 00:23:16.743 { 00:23:16.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:16.743 "dma_device_type": 2 00:23:16.743 } 00:23:16.743 ], 00:23:16.743 "driver_specific": {} 00:23:16.743 }' 00:23:16.743 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:16.743 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:16.743 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:16.743 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:16.743 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:16.743 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:16.743 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:16.743 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:16.743 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:16.743 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:16.743 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:16.743 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:16.743 09:49:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:16.743 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:16.743 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:17.002 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:17.002 "name": "BaseBdev3", 00:23:17.002 "aliases": [ 00:23:17.002 "8a9d6e28-428f-11ef-a0af-c98d8ee52a94" 00:23:17.002 ], 00:23:17.002 "product_name": "Malloc disk", 00:23:17.002 "block_size": 512, 00:23:17.002 "num_blocks": 65536, 00:23:17.002 "uuid": "8a9d6e28-428f-11ef-a0af-c98d8ee52a94", 00:23:17.002 "assigned_rate_limits": { 00:23:17.002 "rw_ios_per_sec": 0, 00:23:17.002 "rw_mbytes_per_sec": 0, 00:23:17.002 "r_mbytes_per_sec": 0, 00:23:17.002 "w_mbytes_per_sec": 0 00:23:17.002 }, 00:23:17.002 "claimed": true, 00:23:17.002 "claim_type": "exclusive_write", 00:23:17.002 "zoned": false, 00:23:17.002 "supported_io_types": { 00:23:17.002 "read": true, 00:23:17.002 "write": true, 00:23:17.002 "unmap": true, 00:23:17.002 "flush": true, 00:23:17.002 "reset": true, 00:23:17.002 "nvme_admin": false, 00:23:17.002 "nvme_io": false, 00:23:17.002 "nvme_io_md": false, 00:23:17.002 "write_zeroes": true, 00:23:17.002 "zcopy": true, 00:23:17.002 "get_zone_info": false, 00:23:17.002 "zone_management": false, 00:23:17.002 "zone_append": false, 00:23:17.003 "compare": false, 00:23:17.003 "compare_and_write": false, 00:23:17.003 "abort": true, 00:23:17.003 "seek_hole": false, 00:23:17.003 "seek_data": false, 00:23:17.003 "copy": true, 00:23:17.003 "nvme_iov_md": false 00:23:17.003 }, 00:23:17.003 "memory_domains": [ 00:23:17.003 { 00:23:17.003 "dma_device_id": "system", 00:23:17.003 "dma_device_type": 1 00:23:17.003 }, 00:23:17.003 { 00:23:17.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:17.003 "dma_device_type": 2 00:23:17.003 } 00:23:17.003 ], 00:23:17.003 "driver_specific": {} 00:23:17.003 }' 00:23:17.003 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:17.003 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:17.003 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:17.003 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:17.003 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:17.003 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:17.003 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:17.003 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:17.003 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:17.003 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:17.003 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:17.003 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:17.003 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_delete Existed_Raid 00:23:17.261 [2024-07-15 09:49:45.159644] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:17.261 [2024-07-15 09:49:45.159670] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:17.261 [2024-07-15 09:49:45.159686] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:17.261 [2024-07-15 09:49:45.159787] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:17.261 [2024-07-15 09:49:45.159792] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12de80c34f00 name Existed_Raid, state offline 00:23:17.261 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 56713 00:23:17.261 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 56713 ']' 00:23:17.261 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 56713 00:23:17.261 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:23:17.261 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:23:17.261 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:23:17.261 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 56713 00:23:17.261 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:23:17.261 killing process with pid 56713 00:23:17.261 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:23:17.261 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 56713' 00:23:17.261 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 56713 00:23:17.261 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 56713 00:23:17.261 [2024-07-15 09:49:45.191079] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:17.261 [2024-07-15 09:49:45.217887] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:17.519 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:23:17.519 00:23:17.519 real 0m22.406s 00:23:17.519 user 0m40.067s 00:23:17.519 sys 0m3.954s 00:23:17.519 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:17.519 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:17.519 ************************************ 00:23:17.519 END TEST raid_state_function_test_sb 00:23:17.519 ************************************ 00:23:17.519 09:49:45 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:23:17.519 09:49:45 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:23:17.519 09:49:45 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:23:17.519 09:49:45 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:17.519 09:49:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:17.519 ************************************ 00:23:17.519 START TEST raid_superblock_test 00:23:17.519 ************************************ 00:23:17.519 09:49:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 3 00:23:17.519 09:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:23:17.519 09:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:23:17.519 09:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:23:17.519 09:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:23:17.519 09:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:23:17.519 09:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:23:17.519 09:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:23:17.520 09:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:23:17.520 09:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:23:17.520 09:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:23:17.520 09:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:23:17.520 09:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:23:17.520 09:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:23:17.520 09:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:23:17.520 09:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:23:17.520 09:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=57433 00:23:17.520 09:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 57433 /var/tmp/spdk-raid.sock 00:23:17.520 09:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:23:17.520 09:49:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 57433 ']' 00:23:17.520 09:49:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:17.520 09:49:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:17.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:17.520 09:49:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:17.520 09:49:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:17.520 09:49:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.520 [2024-07-15 09:49:45.540854] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
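The @410..@412 trace above is the stock harness launch for the new test: bdev_svc is started with -r pointing at the private RPC socket and -L bdev_raid to enable the module's debug log, and waitforlisten (the autotest_common.sh helper whose @829..@838 argument handling appears above) blocks until the socket accepts RPCs. Roughly, under the paths shown in the trace:

# Launch the bdev service and wait for its RPC socket to come up.
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
    -r /var/tmp/spdk-raid.sock -L bdev_raid &
raid_pid=$!
waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock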
00:23:17.520 [2024-07-15 09:49:45.541182] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:23:18.452 EAL: TSC is not safe to use in SMP mode 00:23:18.452 EAL: TSC is not invariant 00:23:18.452 [2024-07-15 09:49:46.241230] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.452 [2024-07-15 09:49:46.348454] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:23:18.452 [2024-07-15 09:49:46.350933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.452 [2024-07-15 09:49:46.351681] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:18.452 [2024-07-15 09:49:46.351692] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:18.452 09:49:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:18.452 09:49:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:23:18.452 09:49:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:23:18.452 09:49:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:18.452 09:49:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:23:18.452 09:49:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:23:18.452 09:49:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:18.452 09:49:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:18.452 09:49:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:23:18.452 09:49:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:18.452 09:49:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:23:18.740 malloc1 00:23:18.740 09:49:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:19.000 [2024-07-15 09:49:46.870697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:19.000 [2024-07-15 09:49:46.870761] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:19.000 [2024-07-15 09:49:46.870771] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2f1edb234780 00:23:19.000 [2024-07-15 09:49:46.870778] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:19.000 [2024-07-15 09:49:46.871745] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:19.000 [2024-07-15 09:49:46.871774] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:19.000 pt1 00:23:19.000 09:49:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:23:19.000 09:49:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:19.000 09:49:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:23:19.000 09:49:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local 
bdev_pt=pt2 00:23:19.000 09:49:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:19.000 09:49:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:19.000 09:49:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:23:19.000 09:49:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:19.000 09:49:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:23:19.000 malloc2 00:23:19.000 09:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:19.262 [2024-07-15 09:49:47.270714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:19.262 [2024-07-15 09:49:47.270767] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:19.262 [2024-07-15 09:49:47.270776] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2f1edb234c80 00:23:19.262 [2024-07-15 09:49:47.270782] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:19.262 [2024-07-15 09:49:47.271460] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:19.262 [2024-07-15 09:49:47.271490] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:19.262 pt2 00:23:19.262 09:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:23:19.262 09:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:19.262 09:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:23:19.262 09:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:23:19.262 09:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:23:19.262 09:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:19.262 09:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:23:19.262 09:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:19.262 09:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:23:19.521 malloc3 00:23:19.521 09:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:19.801 [2024-07-15 09:49:47.718753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:19.801 [2024-07-15 09:49:47.718820] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:19.801 [2024-07-15 09:49:47.718831] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2f1edb235180 00:23:19.801 [2024-07-15 09:49:47.718838] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:19.801 [2024-07-15 09:49:47.719606] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:19.801 [2024-07-15 09:49:47.719645] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:19.801 pt3 00:23:19.801 09:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:23:19.801 09:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:19.801 09:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:23:20.062 [2024-07-15 09:49:47.930769] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:20.062 [2024-07-15 09:49:47.931410] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:20.062 [2024-07-15 09:49:47.931432] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:20.062 [2024-07-15 09:49:47.931481] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2f1edb235400 00:23:20.062 [2024-07-15 09:49:47.931486] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:20.062 [2024-07-15 09:49:47.931521] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2f1edb297e20 00:23:20.062 [2024-07-15 09:49:47.931593] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2f1edb235400 00:23:20.062 [2024-07-15 09:49:47.931596] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2f1edb235400 00:23:20.062 [2024-07-15 09:49:47.931618] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:20.062 09:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:20.062 09:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:20.062 09:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:20.062 09:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:20.062 09:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:20.062 09:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:20.062 09:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:20.062 09:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:20.062 09:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:20.062 09:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:20.062 09:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:20.062 09:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:20.062 09:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:20.062 "name": "raid_bdev1", 00:23:20.062 "uuid": "929d3561-428f-11ef-a0af-c98d8ee52a94", 00:23:20.062 "strip_size_kb": 0, 00:23:20.062 "state": "online", 00:23:20.062 "raid_level": "raid1", 00:23:20.062 "superblock": true, 00:23:20.062 "num_base_bdevs": 3, 00:23:20.062 
"num_base_bdevs_discovered": 3, 00:23:20.062 "num_base_bdevs_operational": 3, 00:23:20.062 "base_bdevs_list": [ 00:23:20.062 { 00:23:20.062 "name": "pt1", 00:23:20.062 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:20.062 "is_configured": true, 00:23:20.062 "data_offset": 2048, 00:23:20.062 "data_size": 63488 00:23:20.062 }, 00:23:20.062 { 00:23:20.062 "name": "pt2", 00:23:20.062 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:20.062 "is_configured": true, 00:23:20.062 "data_offset": 2048, 00:23:20.062 "data_size": 63488 00:23:20.062 }, 00:23:20.062 { 00:23:20.062 "name": "pt3", 00:23:20.062 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:20.062 "is_configured": true, 00:23:20.062 "data_offset": 2048, 00:23:20.062 "data_size": 63488 00:23:20.062 } 00:23:20.062 ] 00:23:20.062 }' 00:23:20.321 09:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:20.321 09:49:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:20.580 09:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:23:20.580 09:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:23:20.580 09:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:20.580 09:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:20.580 09:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:20.580 09:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:20.580 09:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:20.580 09:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:20.580 [2024-07-15 09:49:48.634830] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:20.580 09:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:20.580 "name": "raid_bdev1", 00:23:20.580 "aliases": [ 00:23:20.580 "929d3561-428f-11ef-a0af-c98d8ee52a94" 00:23:20.580 ], 00:23:20.580 "product_name": "Raid Volume", 00:23:20.580 "block_size": 512, 00:23:20.580 "num_blocks": 63488, 00:23:20.580 "uuid": "929d3561-428f-11ef-a0af-c98d8ee52a94", 00:23:20.580 "assigned_rate_limits": { 00:23:20.580 "rw_ios_per_sec": 0, 00:23:20.580 "rw_mbytes_per_sec": 0, 00:23:20.580 "r_mbytes_per_sec": 0, 00:23:20.580 "w_mbytes_per_sec": 0 00:23:20.580 }, 00:23:20.580 "claimed": false, 00:23:20.580 "zoned": false, 00:23:20.580 "supported_io_types": { 00:23:20.580 "read": true, 00:23:20.580 "write": true, 00:23:20.580 "unmap": false, 00:23:20.580 "flush": false, 00:23:20.580 "reset": true, 00:23:20.580 "nvme_admin": false, 00:23:20.580 "nvme_io": false, 00:23:20.580 "nvme_io_md": false, 00:23:20.580 "write_zeroes": true, 00:23:20.580 "zcopy": false, 00:23:20.580 "get_zone_info": false, 00:23:20.580 "zone_management": false, 00:23:20.580 "zone_append": false, 00:23:20.580 "compare": false, 00:23:20.580 "compare_and_write": false, 00:23:20.580 "abort": false, 00:23:20.580 "seek_hole": false, 00:23:20.580 "seek_data": false, 00:23:20.580 "copy": false, 00:23:20.580 "nvme_iov_md": false 00:23:20.580 }, 00:23:20.580 "memory_domains": [ 00:23:20.580 { 00:23:20.580 "dma_device_id": "system", 00:23:20.580 "dma_device_type": 1 00:23:20.580 }, 00:23:20.580 { 
00:23:20.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:20.580 "dma_device_type": 2 00:23:20.580 }, 00:23:20.580 { 00:23:20.580 "dma_device_id": "system", 00:23:20.580 "dma_device_type": 1 00:23:20.580 }, 00:23:20.580 { 00:23:20.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:20.580 "dma_device_type": 2 00:23:20.580 }, 00:23:20.580 { 00:23:20.580 "dma_device_id": "system", 00:23:20.580 "dma_device_type": 1 00:23:20.580 }, 00:23:20.580 { 00:23:20.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:20.580 "dma_device_type": 2 00:23:20.580 } 00:23:20.580 ], 00:23:20.580 "driver_specific": { 00:23:20.580 "raid": { 00:23:20.580 "uuid": "929d3561-428f-11ef-a0af-c98d8ee52a94", 00:23:20.580 "strip_size_kb": 0, 00:23:20.580 "state": "online", 00:23:20.580 "raid_level": "raid1", 00:23:20.580 "superblock": true, 00:23:20.580 "num_base_bdevs": 3, 00:23:20.580 "num_base_bdevs_discovered": 3, 00:23:20.580 "num_base_bdevs_operational": 3, 00:23:20.580 "base_bdevs_list": [ 00:23:20.580 { 00:23:20.580 "name": "pt1", 00:23:20.580 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:20.580 "is_configured": true, 00:23:20.580 "data_offset": 2048, 00:23:20.580 "data_size": 63488 00:23:20.580 }, 00:23:20.580 { 00:23:20.580 "name": "pt2", 00:23:20.580 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:20.580 "is_configured": true, 00:23:20.580 "data_offset": 2048, 00:23:20.580 "data_size": 63488 00:23:20.580 }, 00:23:20.580 { 00:23:20.580 "name": "pt3", 00:23:20.580 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:20.580 "is_configured": true, 00:23:20.580 "data_offset": 2048, 00:23:20.580 "data_size": 63488 00:23:20.580 } 00:23:20.580 ] 00:23:20.580 } 00:23:20.580 } 00:23:20.580 }' 00:23:20.580 09:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:20.580 09:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:23:20.580 pt2 00:23:20.580 pt3' 00:23:20.580 09:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:20.580 09:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:20.580 09:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:21.147 09:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:21.147 "name": "pt1", 00:23:21.147 "aliases": [ 00:23:21.147 "00000000-0000-0000-0000-000000000001" 00:23:21.147 ], 00:23:21.147 "product_name": "passthru", 00:23:21.147 "block_size": 512, 00:23:21.147 "num_blocks": 65536, 00:23:21.147 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:21.147 "assigned_rate_limits": { 00:23:21.147 "rw_ios_per_sec": 0, 00:23:21.147 "rw_mbytes_per_sec": 0, 00:23:21.147 "r_mbytes_per_sec": 0, 00:23:21.147 "w_mbytes_per_sec": 0 00:23:21.147 }, 00:23:21.147 "claimed": true, 00:23:21.147 "claim_type": "exclusive_write", 00:23:21.147 "zoned": false, 00:23:21.147 "supported_io_types": { 00:23:21.147 "read": true, 00:23:21.147 "write": true, 00:23:21.147 "unmap": true, 00:23:21.147 "flush": true, 00:23:21.147 "reset": true, 00:23:21.147 "nvme_admin": false, 00:23:21.147 "nvme_io": false, 00:23:21.147 "nvme_io_md": false, 00:23:21.147 "write_zeroes": true, 00:23:21.147 "zcopy": true, 00:23:21.147 "get_zone_info": false, 00:23:21.147 "zone_management": false, 00:23:21.147 "zone_append": false, 00:23:21.147 
"compare": false, 00:23:21.147 "compare_and_write": false, 00:23:21.147 "abort": true, 00:23:21.147 "seek_hole": false, 00:23:21.147 "seek_data": false, 00:23:21.147 "copy": true, 00:23:21.147 "nvme_iov_md": false 00:23:21.147 }, 00:23:21.147 "memory_domains": [ 00:23:21.147 { 00:23:21.147 "dma_device_id": "system", 00:23:21.147 "dma_device_type": 1 00:23:21.147 }, 00:23:21.147 { 00:23:21.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:21.147 "dma_device_type": 2 00:23:21.147 } 00:23:21.147 ], 00:23:21.147 "driver_specific": { 00:23:21.147 "passthru": { 00:23:21.147 "name": "pt1", 00:23:21.147 "base_bdev_name": "malloc1" 00:23:21.147 } 00:23:21.147 } 00:23:21.147 }' 00:23:21.147 09:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:21.147 09:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:21.147 09:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:21.147 09:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:21.147 09:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:21.147 09:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:21.147 09:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:21.147 09:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:21.147 09:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:21.147 09:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:21.147 09:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:21.147 09:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:21.147 09:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:21.147 09:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:23:21.147 09:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:21.409 09:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:21.409 "name": "pt2", 00:23:21.409 "aliases": [ 00:23:21.409 "00000000-0000-0000-0000-000000000002" 00:23:21.409 ], 00:23:21.409 "product_name": "passthru", 00:23:21.409 "block_size": 512, 00:23:21.409 "num_blocks": 65536, 00:23:21.409 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:21.409 "assigned_rate_limits": { 00:23:21.409 "rw_ios_per_sec": 0, 00:23:21.409 "rw_mbytes_per_sec": 0, 00:23:21.409 "r_mbytes_per_sec": 0, 00:23:21.409 "w_mbytes_per_sec": 0 00:23:21.409 }, 00:23:21.409 "claimed": true, 00:23:21.409 "claim_type": "exclusive_write", 00:23:21.409 "zoned": false, 00:23:21.409 "supported_io_types": { 00:23:21.409 "read": true, 00:23:21.409 "write": true, 00:23:21.409 "unmap": true, 00:23:21.409 "flush": true, 00:23:21.409 "reset": true, 00:23:21.409 "nvme_admin": false, 00:23:21.409 "nvme_io": false, 00:23:21.409 "nvme_io_md": false, 00:23:21.409 "write_zeroes": true, 00:23:21.409 "zcopy": true, 00:23:21.409 "get_zone_info": false, 00:23:21.409 "zone_management": false, 00:23:21.409 "zone_append": false, 00:23:21.409 "compare": false, 00:23:21.409 "compare_and_write": false, 00:23:21.409 "abort": true, 00:23:21.409 "seek_hole": false, 00:23:21.409 "seek_data": false, 
00:23:21.409 "copy": true, 00:23:21.409 "nvme_iov_md": false 00:23:21.409 }, 00:23:21.409 "memory_domains": [ 00:23:21.409 { 00:23:21.409 "dma_device_id": "system", 00:23:21.409 "dma_device_type": 1 00:23:21.409 }, 00:23:21.409 { 00:23:21.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:21.409 "dma_device_type": 2 00:23:21.409 } 00:23:21.409 ], 00:23:21.409 "driver_specific": { 00:23:21.409 "passthru": { 00:23:21.409 "name": "pt2", 00:23:21.409 "base_bdev_name": "malloc2" 00:23:21.409 } 00:23:21.409 } 00:23:21.409 }' 00:23:21.409 09:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:21.409 09:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:21.409 09:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:21.409 09:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:21.409 09:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:21.409 09:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:21.409 09:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:21.409 09:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:21.409 09:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:21.409 09:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:21.409 09:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:21.409 09:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:21.409 09:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:21.409 09:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:23:21.409 09:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:21.672 09:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:21.672 "name": "pt3", 00:23:21.672 "aliases": [ 00:23:21.672 "00000000-0000-0000-0000-000000000003" 00:23:21.672 ], 00:23:21.672 "product_name": "passthru", 00:23:21.672 "block_size": 512, 00:23:21.672 "num_blocks": 65536, 00:23:21.672 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:21.672 "assigned_rate_limits": { 00:23:21.672 "rw_ios_per_sec": 0, 00:23:21.672 "rw_mbytes_per_sec": 0, 00:23:21.672 "r_mbytes_per_sec": 0, 00:23:21.672 "w_mbytes_per_sec": 0 00:23:21.672 }, 00:23:21.672 "claimed": true, 00:23:21.672 "claim_type": "exclusive_write", 00:23:21.672 "zoned": false, 00:23:21.672 "supported_io_types": { 00:23:21.672 "read": true, 00:23:21.672 "write": true, 00:23:21.672 "unmap": true, 00:23:21.672 "flush": true, 00:23:21.672 "reset": true, 00:23:21.672 "nvme_admin": false, 00:23:21.672 "nvme_io": false, 00:23:21.672 "nvme_io_md": false, 00:23:21.672 "write_zeroes": true, 00:23:21.672 "zcopy": true, 00:23:21.672 "get_zone_info": false, 00:23:21.672 "zone_management": false, 00:23:21.672 "zone_append": false, 00:23:21.672 "compare": false, 00:23:21.672 "compare_and_write": false, 00:23:21.672 "abort": true, 00:23:21.672 "seek_hole": false, 00:23:21.672 "seek_data": false, 00:23:21.672 "copy": true, 00:23:21.672 "nvme_iov_md": false 00:23:21.672 }, 00:23:21.672 "memory_domains": [ 00:23:21.672 { 00:23:21.672 "dma_device_id": 
"system", 00:23:21.672 "dma_device_type": 1 00:23:21.672 }, 00:23:21.672 { 00:23:21.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:21.672 "dma_device_type": 2 00:23:21.672 } 00:23:21.672 ], 00:23:21.672 "driver_specific": { 00:23:21.672 "passthru": { 00:23:21.672 "name": "pt3", 00:23:21.672 "base_bdev_name": "malloc3" 00:23:21.672 } 00:23:21.672 } 00:23:21.672 }' 00:23:21.672 09:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:21.672 09:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:21.672 09:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:21.672 09:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:21.672 09:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:21.672 09:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:21.672 09:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:21.672 09:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:21.672 09:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:21.672 09:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:21.672 09:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:21.672 09:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:21.672 09:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:21.672 09:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:23:21.931 [2024-07-15 09:49:49.870875] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:21.931 09:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=929d3561-428f-11ef-a0af-c98d8ee52a94 00:23:21.931 09:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 929d3561-428f-11ef-a0af-c98d8ee52a94 ']' 00:23:21.931 09:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:22.189 [2024-07-15 09:49:50.078857] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:22.189 [2024-07-15 09:49:50.078884] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:22.189 [2024-07-15 09:49:50.078902] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:22.189 [2024-07-15 09:49:50.078936] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:22.189 [2024-07-15 09:49:50.078940] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2f1edb235400 name raid_bdev1, state offline 00:23:22.189 09:49:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.189 09:49:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:23:22.446 09:49:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:23:22.446 09:49:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 
00:23:22.446 09:49:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:23:22.446 09:49:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:22.446 09:49:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:23:22.446 09:49:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:22.704 09:49:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:23:22.704 09:49:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:22.963 09:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:23:22.963 09:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:23.222 09:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:23:23.222 09:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:23:23.222 09:49:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:23:23.222 09:49:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:23:23.222 09:49:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:23.222 09:49:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:23.222 09:49:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:23.222 09:49:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:23.222 09:49:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:23.222 09:49:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:23.222 09:49:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:23.222 09:49:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:23:23.222 09:49:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:23:23.481 [2024-07-15 09:49:51.478956] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:23.481 [2024-07-15 09:49:51.479636] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:23.481 [2024-07-15 09:49:51.479656] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:23:23.481 
[2024-07-15 09:49:51.479669] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:23:23.481 [2024-07-15 09:49:51.479706] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:23:23.481 [2024-07-15 09:49:51.479714] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:23:23.481 [2024-07-15 09:49:51.479721] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:23.481 [2024-07-15 09:49:51.479725] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2f1edb235180 name raid_bdev1, state configuring 00:23:23.481 request: 00:23:23.481 { 00:23:23.481 "name": "raid_bdev1", 00:23:23.481 "raid_level": "raid1", 00:23:23.481 "base_bdevs": [ 00:23:23.481 "malloc1", 00:23:23.481 "malloc2", 00:23:23.481 "malloc3" 00:23:23.481 ], 00:23:23.481 "superblock": false, 00:23:23.481 "method": "bdev_raid_create", 00:23:23.481 "req_id": 1 00:23:23.481 } 00:23:23.481 Got JSON-RPC error response 00:23:23.481 response: 00:23:23.481 { 00:23:23.481 "code": -17, 00:23:23.481 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:23.481 } 00:23:23.481 09:49:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:23:23.481 09:49:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:23.481 09:49:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:23.481 09:49:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:23.481 09:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.481 09:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:23:23.739 09:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:23:23.739 09:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:23:23.739 09:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:23.999 [2024-07-15 09:49:51.874982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:23.999 [2024-07-15 09:49:51.875042] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:23.999 [2024-07-15 09:49:51.875052] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2f1edb234c80 00:23:23.999 [2024-07-15 09:49:51.875058] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:23.999 [2024-07-15 09:49:51.875820] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:23.999 [2024-07-15 09:49:51.875848] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:23.999 [2024-07-15 09:49:51.875869] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:23.999 [2024-07-15 09:49:51.875881] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:23.999 pt1 00:23:23.999 09:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:23:23.999 
09:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:23.999 09:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:23.999 09:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:23.999 09:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:23.999 09:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:23.999 09:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:23.999 09:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:23.999 09:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:23.999 09:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:23.999 09:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.999 09:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:23.999 09:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:23.999 "name": "raid_bdev1", 00:23:23.999 "uuid": "929d3561-428f-11ef-a0af-c98d8ee52a94", 00:23:23.999 "strip_size_kb": 0, 00:23:23.999 "state": "configuring", 00:23:23.999 "raid_level": "raid1", 00:23:23.999 "superblock": true, 00:23:23.999 "num_base_bdevs": 3, 00:23:23.999 "num_base_bdevs_discovered": 1, 00:23:23.999 "num_base_bdevs_operational": 3, 00:23:23.999 "base_bdevs_list": [ 00:23:23.999 { 00:23:23.999 "name": "pt1", 00:23:23.999 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:23.999 "is_configured": true, 00:23:23.999 "data_offset": 2048, 00:23:23.999 "data_size": 63488 00:23:23.999 }, 00:23:23.999 { 00:23:23.999 "name": null, 00:23:23.999 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:23.999 "is_configured": false, 00:23:23.999 "data_offset": 2048, 00:23:23.999 "data_size": 63488 00:23:23.999 }, 00:23:23.999 { 00:23:23.999 "name": null, 00:23:23.999 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:23.999 "is_configured": false, 00:23:23.999 "data_offset": 2048, 00:23:23.999 "data_size": 63488 00:23:23.999 } 00:23:23.999 ] 00:23:23.999 }' 00:23:23.999 09:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:23.999 09:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:24.568 09:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:23:24.568 09:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:24.568 [2024-07-15 09:49:52.591013] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:24.568 [2024-07-15 09:49:52.591069] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:24.568 [2024-07-15 09:49:52.591079] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2f1edb235680 00:23:24.568 [2024-07-15 09:49:52.591085] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:24.568 [2024-07-15 09:49:52.591184] vbdev_passthru.c: 708:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:23:24.568 [2024-07-15 09:49:52.591191] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:24.568 [2024-07-15 09:49:52.591207] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:24.568 [2024-07-15 09:49:52.591214] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:24.568 pt2 00:23:24.568 09:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:24.828 [2024-07-15 09:49:52.795042] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:23:24.828 09:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:23:24.828 09:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:24.828 09:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:24.828 09:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:24.828 09:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:24.828 09:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:24.828 09:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:24.828 09:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:24.828 09:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:24.828 09:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:24.828 09:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:24.828 09:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:25.088 09:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:25.088 "name": "raid_bdev1", 00:23:25.088 "uuid": "929d3561-428f-11ef-a0af-c98d8ee52a94", 00:23:25.088 "strip_size_kb": 0, 00:23:25.088 "state": "configuring", 00:23:25.088 "raid_level": "raid1", 00:23:25.088 "superblock": true, 00:23:25.088 "num_base_bdevs": 3, 00:23:25.088 "num_base_bdevs_discovered": 1, 00:23:25.088 "num_base_bdevs_operational": 3, 00:23:25.088 "base_bdevs_list": [ 00:23:25.088 { 00:23:25.088 "name": "pt1", 00:23:25.088 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:25.088 "is_configured": true, 00:23:25.088 "data_offset": 2048, 00:23:25.088 "data_size": 63488 00:23:25.088 }, 00:23:25.088 { 00:23:25.088 "name": null, 00:23:25.088 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:25.088 "is_configured": false, 00:23:25.088 "data_offset": 2048, 00:23:25.088 "data_size": 63488 00:23:25.088 }, 00:23:25.088 { 00:23:25.088 "name": null, 00:23:25.088 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:25.088 "is_configured": false, 00:23:25.088 "data_offset": 2048, 00:23:25.088 "data_size": 63488 00:23:25.088 } 00:23:25.088 ] 00:23:25.088 }' 00:23:25.088 09:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:25.088 09:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:25.347 09:49:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:23:25.347 09:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:23:25.347 09:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:25.606 [2024-07-15 09:49:53.511123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:25.606 [2024-07-15 09:49:53.511188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:25.606 [2024-07-15 09:49:53.511200] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2f1edb235680 00:23:25.606 [2024-07-15 09:49:53.511206] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:25.606 [2024-07-15 09:49:53.511334] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:25.606 [2024-07-15 09:49:53.511341] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:25.606 [2024-07-15 09:49:53.511363] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:25.606 [2024-07-15 09:49:53.511370] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:25.606 pt2 00:23:25.606 09:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:23:25.606 09:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:23:25.606 09:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:25.606 [2024-07-15 09:49:53.699117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:25.606 [2024-07-15 09:49:53.699170] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:25.606 [2024-07-15 09:49:53.699178] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2f1edb235400 00:23:25.606 [2024-07-15 09:49:53.699185] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:25.606 [2024-07-15 09:49:53.699267] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:25.606 [2024-07-15 09:49:53.699274] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:25.606 [2024-07-15 09:49:53.699288] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:25.606 [2024-07-15 09:49:53.699294] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:25.606 [2024-07-15 09:49:53.699316] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2f1edb234780 00:23:25.606 [2024-07-15 09:49:53.699319] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:25.606 [2024-07-15 09:49:53.699336] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2f1edb297e20 00:23:25.606 [2024-07-15 09:49:53.699380] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2f1edb234780 00:23:25.606 [2024-07-15 09:49:53.699384] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2f1edb234780 00:23:25.606 [2024-07-15 09:49:53.699416] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
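The step traced above (bdev_raid.sh@477-478) shows why the test never calls bdev_raid_create at this point: recreating the passthru bdevs is enough, because bdev_raid's examine path finds the superblock already written on each base bdev ("raid superblock found on bdev ptN", "bdev ptN is claimed") and re-assembles raid_bdev1 on its own. A sketch of the same sequence, with the socket path, bdev names, and UUIDs taken from this run:

    sock=/var/tmp/spdk-raid.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Re-create the remaining members; examine re-claims each one.
    for i in 2 3; do
        "$rpc" -s "$sock" bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done

    # No explicit create call: once enough members are back, the
    # array transitions to online by itself.
    "$rpc" -s "$sock" bdev_raid_get_bdevs online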
00:23:25.606 pt3 00:23:25.866 09:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:23:25.866 09:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:23:25.866 09:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:25.866 09:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:25.866 09:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:25.866 09:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:25.866 09:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:25.866 09:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:25.866 09:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:25.866 09:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:25.866 09:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:25.866 09:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:25.866 09:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:25.866 09:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:25.866 09:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:25.866 "name": "raid_bdev1", 00:23:25.866 "uuid": "929d3561-428f-11ef-a0af-c98d8ee52a94", 00:23:25.866 "strip_size_kb": 0, 00:23:25.866 "state": "online", 00:23:25.866 "raid_level": "raid1", 00:23:25.866 "superblock": true, 00:23:25.866 "num_base_bdevs": 3, 00:23:25.866 "num_base_bdevs_discovered": 3, 00:23:25.866 "num_base_bdevs_operational": 3, 00:23:25.866 "base_bdevs_list": [ 00:23:25.866 { 00:23:25.866 "name": "pt1", 00:23:25.866 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:25.866 "is_configured": true, 00:23:25.866 "data_offset": 2048, 00:23:25.866 "data_size": 63488 00:23:25.866 }, 00:23:25.866 { 00:23:25.866 "name": "pt2", 00:23:25.866 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:25.866 "is_configured": true, 00:23:25.866 "data_offset": 2048, 00:23:25.866 "data_size": 63488 00:23:25.866 }, 00:23:25.866 { 00:23:25.866 "name": "pt3", 00:23:25.866 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:25.866 "is_configured": true, 00:23:25.866 "data_offset": 2048, 00:23:25.866 "data_size": 63488 00:23:25.866 } 00:23:25.866 ] 00:23:25.866 }' 00:23:25.866 09:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:25.866 09:49:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.124 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:23:26.124 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:23:26.124 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:26.124 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:26.124 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local 
base_bdev_names 00:23:26.124 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:26.124 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:26.124 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:26.384 [2024-07-15 09:49:54.415183] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:26.384 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:26.384 "name": "raid_bdev1", 00:23:26.384 "aliases": [ 00:23:26.384 "929d3561-428f-11ef-a0af-c98d8ee52a94" 00:23:26.384 ], 00:23:26.384 "product_name": "Raid Volume", 00:23:26.384 "block_size": 512, 00:23:26.384 "num_blocks": 63488, 00:23:26.384 "uuid": "929d3561-428f-11ef-a0af-c98d8ee52a94", 00:23:26.384 "assigned_rate_limits": { 00:23:26.384 "rw_ios_per_sec": 0, 00:23:26.384 "rw_mbytes_per_sec": 0, 00:23:26.384 "r_mbytes_per_sec": 0, 00:23:26.384 "w_mbytes_per_sec": 0 00:23:26.384 }, 00:23:26.384 "claimed": false, 00:23:26.384 "zoned": false, 00:23:26.384 "supported_io_types": { 00:23:26.384 "read": true, 00:23:26.384 "write": true, 00:23:26.384 "unmap": false, 00:23:26.384 "flush": false, 00:23:26.384 "reset": true, 00:23:26.384 "nvme_admin": false, 00:23:26.384 "nvme_io": false, 00:23:26.384 "nvme_io_md": false, 00:23:26.384 "write_zeroes": true, 00:23:26.384 "zcopy": false, 00:23:26.384 "get_zone_info": false, 00:23:26.384 "zone_management": false, 00:23:26.384 "zone_append": false, 00:23:26.384 "compare": false, 00:23:26.384 "compare_and_write": false, 00:23:26.384 "abort": false, 00:23:26.384 "seek_hole": false, 00:23:26.384 "seek_data": false, 00:23:26.384 "copy": false, 00:23:26.384 "nvme_iov_md": false 00:23:26.384 }, 00:23:26.384 "memory_domains": [ 00:23:26.384 { 00:23:26.384 "dma_device_id": "system", 00:23:26.384 "dma_device_type": 1 00:23:26.384 }, 00:23:26.384 { 00:23:26.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:26.384 "dma_device_type": 2 00:23:26.384 }, 00:23:26.384 { 00:23:26.384 "dma_device_id": "system", 00:23:26.384 "dma_device_type": 1 00:23:26.384 }, 00:23:26.384 { 00:23:26.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:26.384 "dma_device_type": 2 00:23:26.384 }, 00:23:26.384 { 00:23:26.384 "dma_device_id": "system", 00:23:26.384 "dma_device_type": 1 00:23:26.384 }, 00:23:26.384 { 00:23:26.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:26.384 "dma_device_type": 2 00:23:26.384 } 00:23:26.384 ], 00:23:26.384 "driver_specific": { 00:23:26.384 "raid": { 00:23:26.384 "uuid": "929d3561-428f-11ef-a0af-c98d8ee52a94", 00:23:26.384 "strip_size_kb": 0, 00:23:26.384 "state": "online", 00:23:26.384 "raid_level": "raid1", 00:23:26.384 "superblock": true, 00:23:26.384 "num_base_bdevs": 3, 00:23:26.384 "num_base_bdevs_discovered": 3, 00:23:26.384 "num_base_bdevs_operational": 3, 00:23:26.384 "base_bdevs_list": [ 00:23:26.384 { 00:23:26.384 "name": "pt1", 00:23:26.384 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:26.384 "is_configured": true, 00:23:26.384 "data_offset": 2048, 00:23:26.384 "data_size": 63488 00:23:26.384 }, 00:23:26.384 { 00:23:26.384 "name": "pt2", 00:23:26.384 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:26.384 "is_configured": true, 00:23:26.384 "data_offset": 2048, 00:23:26.384 "data_size": 63488 00:23:26.384 }, 00:23:26.384 { 00:23:26.384 "name": "pt3", 00:23:26.384 "uuid": "00000000-0000-0000-0000-000000000003", 
00:23:26.384 "is_configured": true, 00:23:26.384 "data_offset": 2048, 00:23:26.384 "data_size": 63488 00:23:26.384 } 00:23:26.384 ] 00:23:26.384 } 00:23:26.384 } 00:23:26.384 }' 00:23:26.384 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:26.384 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:23:26.384 pt2 00:23:26.384 pt3' 00:23:26.384 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:26.384 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:26.384 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:26.643 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:26.643 "name": "pt1", 00:23:26.643 "aliases": [ 00:23:26.643 "00000000-0000-0000-0000-000000000001" 00:23:26.643 ], 00:23:26.643 "product_name": "passthru", 00:23:26.643 "block_size": 512, 00:23:26.643 "num_blocks": 65536, 00:23:26.643 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:26.643 "assigned_rate_limits": { 00:23:26.643 "rw_ios_per_sec": 0, 00:23:26.643 "rw_mbytes_per_sec": 0, 00:23:26.643 "r_mbytes_per_sec": 0, 00:23:26.643 "w_mbytes_per_sec": 0 00:23:26.643 }, 00:23:26.643 "claimed": true, 00:23:26.643 "claim_type": "exclusive_write", 00:23:26.643 "zoned": false, 00:23:26.643 "supported_io_types": { 00:23:26.643 "read": true, 00:23:26.643 "write": true, 00:23:26.643 "unmap": true, 00:23:26.643 "flush": true, 00:23:26.643 "reset": true, 00:23:26.643 "nvme_admin": false, 00:23:26.643 "nvme_io": false, 00:23:26.643 "nvme_io_md": false, 00:23:26.643 "write_zeroes": true, 00:23:26.643 "zcopy": true, 00:23:26.643 "get_zone_info": false, 00:23:26.643 "zone_management": false, 00:23:26.643 "zone_append": false, 00:23:26.643 "compare": false, 00:23:26.643 "compare_and_write": false, 00:23:26.643 "abort": true, 00:23:26.643 "seek_hole": false, 00:23:26.643 "seek_data": false, 00:23:26.643 "copy": true, 00:23:26.643 "nvme_iov_md": false 00:23:26.643 }, 00:23:26.643 "memory_domains": [ 00:23:26.643 { 00:23:26.643 "dma_device_id": "system", 00:23:26.643 "dma_device_type": 1 00:23:26.643 }, 00:23:26.643 { 00:23:26.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:26.643 "dma_device_type": 2 00:23:26.643 } 00:23:26.643 ], 00:23:26.643 "driver_specific": { 00:23:26.643 "passthru": { 00:23:26.643 "name": "pt1", 00:23:26.643 "base_bdev_name": "malloc1" 00:23:26.643 } 00:23:26.643 } 00:23:26.643 }' 00:23:26.643 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:26.643 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:26.643 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:26.643 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:26.643 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:26.643 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:26.643 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:26.643 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:26.643 09:49:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:26.643 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:26.643 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:26.934 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:26.934 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:26.935 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:23:26.935 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:26.935 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:26.935 "name": "pt2", 00:23:26.935 "aliases": [ 00:23:26.935 "00000000-0000-0000-0000-000000000002" 00:23:26.935 ], 00:23:26.935 "product_name": "passthru", 00:23:26.935 "block_size": 512, 00:23:26.935 "num_blocks": 65536, 00:23:26.935 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:26.935 "assigned_rate_limits": { 00:23:26.935 "rw_ios_per_sec": 0, 00:23:26.935 "rw_mbytes_per_sec": 0, 00:23:26.935 "r_mbytes_per_sec": 0, 00:23:26.935 "w_mbytes_per_sec": 0 00:23:26.935 }, 00:23:26.935 "claimed": true, 00:23:26.935 "claim_type": "exclusive_write", 00:23:26.935 "zoned": false, 00:23:26.935 "supported_io_types": { 00:23:26.935 "read": true, 00:23:26.935 "write": true, 00:23:26.935 "unmap": true, 00:23:26.935 "flush": true, 00:23:26.935 "reset": true, 00:23:26.935 "nvme_admin": false, 00:23:26.935 "nvme_io": false, 00:23:26.935 "nvme_io_md": false, 00:23:26.935 "write_zeroes": true, 00:23:26.935 "zcopy": true, 00:23:26.935 "get_zone_info": false, 00:23:26.935 "zone_management": false, 00:23:26.935 "zone_append": false, 00:23:26.935 "compare": false, 00:23:26.935 "compare_and_write": false, 00:23:26.935 "abort": true, 00:23:26.935 "seek_hole": false, 00:23:26.935 "seek_data": false, 00:23:26.935 "copy": true, 00:23:26.935 "nvme_iov_md": false 00:23:26.935 }, 00:23:26.935 "memory_domains": [ 00:23:26.935 { 00:23:26.935 "dma_device_id": "system", 00:23:26.935 "dma_device_type": 1 00:23:26.935 }, 00:23:26.935 { 00:23:26.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:26.935 "dma_device_type": 2 00:23:26.935 } 00:23:26.935 ], 00:23:26.935 "driver_specific": { 00:23:26.935 "passthru": { 00:23:26.935 "name": "pt2", 00:23:26.935 "base_bdev_name": "malloc2" 00:23:26.935 } 00:23:26.935 } 00:23:26.935 }' 00:23:26.935 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:26.935 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:26.935 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:26.935 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:26.935 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:26.935 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:26.935 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:26.935 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:26.935 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:26.935 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:27.193 
09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:27.193 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:27.193 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:27.193 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:23:27.193 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:27.193 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:27.193 "name": "pt3", 00:23:27.193 "aliases": [ 00:23:27.193 "00000000-0000-0000-0000-000000000003" 00:23:27.193 ], 00:23:27.193 "product_name": "passthru", 00:23:27.193 "block_size": 512, 00:23:27.193 "num_blocks": 65536, 00:23:27.193 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:27.193 "assigned_rate_limits": { 00:23:27.193 "rw_ios_per_sec": 0, 00:23:27.193 "rw_mbytes_per_sec": 0, 00:23:27.193 "r_mbytes_per_sec": 0, 00:23:27.193 "w_mbytes_per_sec": 0 00:23:27.193 }, 00:23:27.193 "claimed": true, 00:23:27.193 "claim_type": "exclusive_write", 00:23:27.193 "zoned": false, 00:23:27.193 "supported_io_types": { 00:23:27.193 "read": true, 00:23:27.193 "write": true, 00:23:27.193 "unmap": true, 00:23:27.193 "flush": true, 00:23:27.193 "reset": true, 00:23:27.193 "nvme_admin": false, 00:23:27.193 "nvme_io": false, 00:23:27.193 "nvme_io_md": false, 00:23:27.193 "write_zeroes": true, 00:23:27.193 "zcopy": true, 00:23:27.193 "get_zone_info": false, 00:23:27.193 "zone_management": false, 00:23:27.193 "zone_append": false, 00:23:27.193 "compare": false, 00:23:27.193 "compare_and_write": false, 00:23:27.193 "abort": true, 00:23:27.193 "seek_hole": false, 00:23:27.193 "seek_data": false, 00:23:27.193 "copy": true, 00:23:27.193 "nvme_iov_md": false 00:23:27.193 }, 00:23:27.193 "memory_domains": [ 00:23:27.193 { 00:23:27.193 "dma_device_id": "system", 00:23:27.193 "dma_device_type": 1 00:23:27.193 }, 00:23:27.193 { 00:23:27.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:27.194 "dma_device_type": 2 00:23:27.194 } 00:23:27.194 ], 00:23:27.194 "driver_specific": { 00:23:27.194 "passthru": { 00:23:27.194 "name": "pt3", 00:23:27.194 "base_bdev_name": "malloc3" 00:23:27.194 } 00:23:27.194 } 00:23:27.194 }' 00:23:27.194 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:27.452 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:27.452 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:27.452 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:27.452 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:27.452 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:27.452 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:27.452 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:27.452 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:27.452 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:27.452 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:27.452 09:49:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:27.452 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:27.452 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:23:27.711 [2024-07-15 09:49:55.567240] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:27.711 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 929d3561-428f-11ef-a0af-c98d8ee52a94 '!=' 929d3561-428f-11ef-a0af-c98d8ee52a94 ']' 00:23:27.711 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:23:27.711 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:27.711 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:23:27.711 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:27.711 [2024-07-15 09:49:55.803227] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:27.970 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:27.970 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:27.970 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:27.970 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:27.970 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:27.970 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:27.970 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:27.970 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:27.970 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:27.970 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:27.970 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:27.970 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:27.970 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:27.970 "name": "raid_bdev1", 00:23:27.970 "uuid": "929d3561-428f-11ef-a0af-c98d8ee52a94", 00:23:27.970 "strip_size_kb": 0, 00:23:27.970 "state": "online", 00:23:27.970 "raid_level": "raid1", 00:23:27.970 "superblock": true, 00:23:27.970 "num_base_bdevs": 3, 00:23:27.970 "num_base_bdevs_discovered": 2, 00:23:27.970 "num_base_bdevs_operational": 2, 00:23:27.970 "base_bdevs_list": [ 00:23:27.970 { 00:23:27.970 "name": null, 00:23:27.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:27.970 "is_configured": false, 00:23:27.970 "data_offset": 2048, 00:23:27.970 "data_size": 63488 00:23:27.970 }, 00:23:27.970 { 00:23:27.970 "name": "pt2", 00:23:27.970 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:27.970 "is_configured": true, 00:23:27.970 "data_offset": 2048, 00:23:27.970 "data_size": 63488 00:23:27.970 }, 00:23:27.970 { 
00:23:27.970 "name": "pt3", 00:23:27.970 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:27.970 "is_configured": true, 00:23:27.970 "data_offset": 2048, 00:23:27.970 "data_size": 63488 00:23:27.970 } 00:23:27.970 ] 00:23:27.970 }' 00:23:27.970 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:27.970 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.538 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:28.538 [2024-07-15 09:49:56.631296] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:28.538 [2024-07-15 09:49:56.631330] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:28.538 [2024-07-15 09:49:56.631354] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:28.538 [2024-07-15 09:49:56.631373] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:28.538 [2024-07-15 09:49:56.631377] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2f1edb234780 name raid_bdev1, state offline 00:23:28.796 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:23:28.797 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:29.055 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:23:29.055 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:23:29.055 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:23:29.055 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:23:29.055 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:29.055 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:23:29.055 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:23:29.055 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:29.333 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:23:29.333 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:23:29.333 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:23:29.333 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:23:29.333 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:29.617 [2024-07-15 09:49:57.535322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:29.617 [2024-07-15 09:49:57.535384] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:29.617 [2024-07-15 09:49:57.535394] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2f1edb235400 00:23:29.617 [2024-07-15 
09:49:57.535400] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:29.617 [2024-07-15 09:49:57.536185] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:29.617 [2024-07-15 09:49:57.536217] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:29.617 [2024-07-15 09:49:57.536241] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:29.617 [2024-07-15 09:49:57.536252] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:29.617 pt2 00:23:29.618 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:23:29.618 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:29.618 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:29.618 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:29.618 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:29.618 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:29.618 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:29.618 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:29.618 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:29.618 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:29.618 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:29.618 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:29.877 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:29.877 "name": "raid_bdev1", 00:23:29.877 "uuid": "929d3561-428f-11ef-a0af-c98d8ee52a94", 00:23:29.877 "strip_size_kb": 0, 00:23:29.877 "state": "configuring", 00:23:29.877 "raid_level": "raid1", 00:23:29.877 "superblock": true, 00:23:29.877 "num_base_bdevs": 3, 00:23:29.877 "num_base_bdevs_discovered": 1, 00:23:29.877 "num_base_bdevs_operational": 2, 00:23:29.877 "base_bdevs_list": [ 00:23:29.877 { 00:23:29.877 "name": null, 00:23:29.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:29.877 "is_configured": false, 00:23:29.877 "data_offset": 2048, 00:23:29.877 "data_size": 63488 00:23:29.877 }, 00:23:29.878 { 00:23:29.878 "name": "pt2", 00:23:29.878 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:29.878 "is_configured": true, 00:23:29.878 "data_offset": 2048, 00:23:29.878 "data_size": 63488 00:23:29.878 }, 00:23:29.878 { 00:23:29.878 "name": null, 00:23:29.878 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:29.878 "is_configured": false, 00:23:29.878 "data_offset": 2048, 00:23:29.878 "data_size": 63488 00:23:29.878 } 00:23:29.878 ] 00:23:29.878 }' 00:23:29.878 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:29.878 09:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.137 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:23:30.137 09:49:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:23:30.137 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=2 00:23:30.137 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:30.396 [2024-07-15 09:49:58.259372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:30.396 [2024-07-15 09:49:58.259440] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:30.396 [2024-07-15 09:49:58.259451] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2f1edb234780 00:23:30.396 [2024-07-15 09:49:58.259457] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:30.396 [2024-07-15 09:49:58.259574] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:30.396 [2024-07-15 09:49:58.259581] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:30.396 [2024-07-15 09:49:58.259600] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:30.396 [2024-07-15 09:49:58.259608] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:30.396 [2024-07-15 09:49:58.259633] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2f1edb235180 00:23:30.396 [2024-07-15 09:49:58.259636] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:30.396 [2024-07-15 09:49:58.259653] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2f1edb297e20 00:23:30.396 [2024-07-15 09:49:58.259690] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2f1edb235180 00:23:30.396 [2024-07-15 09:49:58.259693] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2f1edb235180 00:23:30.396 [2024-07-15 09:49:58.259708] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:30.396 pt3 00:23:30.396 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:30.396 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:30.396 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:30.396 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:30.396 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:30.396 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:30.397 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:30.397 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:30.397 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:30.397 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:30.397 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:30.397 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
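The verify_raid_bdev_state helper invoked above (bdev_raid.sh@116-128) reduces to pulling the raid_bdev1 entry out of bdev_raid_get_bdevs and comparing its fields against the expected values, here online/raid1 with two members. A sketch of the same check, assuming the paths from this run:

    sock=/var/tmp/spdk-raid.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Select the one entry whose name matches, as bdev_raid.sh@126 does.
    tmp=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
            | jq -r '.[] | select(.name == "raid_bdev1")')

    [[ $(jq -r .state <<< "$tmp") == online ]]
    [[ $(jq -r .raid_level <<< "$tmp") == raid1 ]]
    [[ $(jq .num_base_bdevs_discovered <<< "$tmp") == 2 ]]
    [[ $(jq .num_base_bdevs_operational <<< "$tmp") == 2 ]]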
00:23:30.397 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:30.397 "name": "raid_bdev1", 00:23:30.397 "uuid": "929d3561-428f-11ef-a0af-c98d8ee52a94", 00:23:30.397 "strip_size_kb": 0, 00:23:30.397 "state": "online", 00:23:30.397 "raid_level": "raid1", 00:23:30.397 "superblock": true, 00:23:30.397 "num_base_bdevs": 3, 00:23:30.397 "num_base_bdevs_discovered": 2, 00:23:30.397 "num_base_bdevs_operational": 2, 00:23:30.397 "base_bdevs_list": [ 00:23:30.397 { 00:23:30.397 "name": null, 00:23:30.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.397 "is_configured": false, 00:23:30.397 "data_offset": 2048, 00:23:30.397 "data_size": 63488 00:23:30.397 }, 00:23:30.397 { 00:23:30.397 "name": "pt2", 00:23:30.397 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:30.397 "is_configured": true, 00:23:30.397 "data_offset": 2048, 00:23:30.397 "data_size": 63488 00:23:30.397 }, 00:23:30.397 { 00:23:30.397 "name": "pt3", 00:23:30.397 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:30.397 "is_configured": true, 00:23:30.397 "data_offset": 2048, 00:23:30.397 "data_size": 63488 00:23:30.397 } 00:23:30.397 ] 00:23:30.397 }' 00:23:30.397 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:30.397 09:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.966 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:30.966 [2024-07-15 09:49:58.975383] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:30.966 [2024-07-15 09:49:58.975408] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:30.966 [2024-07-15 09:49:58.975424] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:30.966 [2024-07-15 09:49:58.975435] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:30.966 [2024-07-15 09:49:58.975439] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2f1edb235180 name raid_bdev1, state offline 00:23:30.966 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:30.966 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:23:31.225 09:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:23:31.225 09:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:23:31.225 09:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 3 -gt 2 ']' 00:23:31.225 09:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=2 00:23:31.225 09:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:31.484 09:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:31.484 [2024-07-15 09:49:59.551428] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:31.484 [2024-07-15 09:49:59.551485] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:31.484 
[2024-07-15 09:49:59.551494] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2f1edb234780 00:23:31.484 [2024-07-15 09:49:59.551501] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:31.484 [2024-07-15 09:49:59.552277] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:31.484 [2024-07-15 09:49:59.552304] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:31.484 [2024-07-15 09:49:59.552326] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:31.484 [2024-07-15 09:49:59.552353] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:31.484 [2024-07-15 09:49:59.552379] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:31.484 [2024-07-15 09:49:59.552383] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:31.484 [2024-07-15 09:49:59.552387] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2f1edb235180 name raid_bdev1, state configuring 00:23:31.484 [2024-07-15 09:49:59.552393] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:31.484 pt1 00:23:31.484 09:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 3 -gt 2 ']' 00:23:31.484 09:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:23:31.484 09:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:31.484 09:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:31.484 09:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:31.484 09:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:31.484 09:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:31.484 09:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:31.484 09:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:31.484 09:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:31.484 09:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:31.484 09:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.484 09:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.742 09:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:31.742 "name": "raid_bdev1", 00:23:31.743 "uuid": "929d3561-428f-11ef-a0af-c98d8ee52a94", 00:23:31.743 "strip_size_kb": 0, 00:23:31.743 "state": "configuring", 00:23:31.743 "raid_level": "raid1", 00:23:31.743 "superblock": true, 00:23:31.743 "num_base_bdevs": 3, 00:23:31.743 "num_base_bdevs_discovered": 1, 00:23:31.743 "num_base_bdevs_operational": 2, 00:23:31.743 "base_bdevs_list": [ 00:23:31.743 { 00:23:31.743 "name": null, 00:23:31.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.743 "is_configured": false, 00:23:31.743 "data_offset": 2048, 00:23:31.743 "data_size": 63488 00:23:31.743 }, 
00:23:31.743 { 00:23:31.743 "name": "pt2", 00:23:31.743 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:31.743 "is_configured": true, 00:23:31.743 "data_offset": 2048, 00:23:31.743 "data_size": 63488 00:23:31.743 }, 00:23:31.743 { 00:23:31.743 "name": null, 00:23:31.743 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:31.743 "is_configured": false, 00:23:31.743 "data_offset": 2048, 00:23:31.743 "data_size": 63488 00:23:31.743 } 00:23:31.743 ] 00:23:31.743 }' 00:23:31.743 09:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:31.743 09:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.002 09:50:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:23:32.002 09:50:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:32.261 09:50:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:23:32.261 09:50:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:32.520 [2024-07-15 09:50:00.491516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:32.520 [2024-07-15 09:50:00.491582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:32.520 [2024-07-15 09:50:00.491592] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2f1edb234c80 00:23:32.520 [2024-07-15 09:50:00.491599] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:32.521 [2024-07-15 09:50:00.491704] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:32.521 [2024-07-15 09:50:00.491712] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:32.521 [2024-07-15 09:50:00.491728] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:32.521 [2024-07-15 09:50:00.491734] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:32.521 [2024-07-15 09:50:00.491758] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2f1edb235180 00:23:32.521 [2024-07-15 09:50:00.491762] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:32.521 [2024-07-15 09:50:00.491780] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2f1edb297e20 00:23:32.521 [2024-07-15 09:50:00.491813] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2f1edb235180 00:23:32.521 [2024-07-15 09:50:00.491816] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2f1edb235180 00:23:32.521 [2024-07-15 09:50:00.491832] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:32.521 pt3 00:23:32.521 09:50:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:32.521 09:50:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:32.521 09:50:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:32.521 09:50:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:23:32.521 09:50:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:32.521 09:50:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:32.521 09:50:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:32.521 09:50:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:32.521 09:50:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:32.521 09:50:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:32.521 09:50:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:32.521 09:50:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:32.779 09:50:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:32.779 "name": "raid_bdev1", 00:23:32.779 "uuid": "929d3561-428f-11ef-a0af-c98d8ee52a94", 00:23:32.779 "strip_size_kb": 0, 00:23:32.779 "state": "online", 00:23:32.779 "raid_level": "raid1", 00:23:32.779 "superblock": true, 00:23:32.779 "num_base_bdevs": 3, 00:23:32.779 "num_base_bdevs_discovered": 2, 00:23:32.779 "num_base_bdevs_operational": 2, 00:23:32.779 "base_bdevs_list": [ 00:23:32.779 { 00:23:32.779 "name": null, 00:23:32.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:32.779 "is_configured": false, 00:23:32.779 "data_offset": 2048, 00:23:32.779 "data_size": 63488 00:23:32.779 }, 00:23:32.779 { 00:23:32.779 "name": "pt2", 00:23:32.779 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:32.779 "is_configured": true, 00:23:32.779 "data_offset": 2048, 00:23:32.779 "data_size": 63488 00:23:32.779 }, 00:23:32.779 { 00:23:32.779 "name": "pt3", 00:23:32.779 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:32.779 "is_configured": true, 00:23:32.779 "data_offset": 2048, 00:23:32.779 "data_size": 63488 00:23:32.779 } 00:23:32.779 ] 00:23:32.779 }' 00:23:32.779 09:50:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:32.779 09:50:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.037 09:50:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:23:33.037 09:50:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:33.296 09:50:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:23:33.296 09:50:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:23:33.296 09:50:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:33.554 [2024-07-15 09:50:01.487603] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:33.554 09:50:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 929d3561-428f-11ef-a0af-c98d8ee52a94 '!=' 929d3561-428f-11ef-a0af-c98d8ee52a94 ']' 00:23:33.554 09:50:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 57433 00:23:33.554 09:50:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 57433 ']' 00:23:33.554 09:50:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 57433 00:23:33.554 09:50:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:23:33.554 09:50:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:23:33.554 09:50:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 57433 00:23:33.554 09:50:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:23:33.554 09:50:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:23:33.554 09:50:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:23:33.554 killing process with pid 57433 00:23:33.554 09:50:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 57433' 00:23:33.554 09:50:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 57433 00:23:33.554 [2024-07-15 09:50:01.523786] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:33.554 [2024-07-15 09:50:01.523819] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:33.554 [2024-07-15 09:50:01.523835] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:33.554 [2024-07-15 09:50:01.523839] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2f1edb235180 name raid_bdev1, state offline 00:23:33.554 09:50:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 57433 00:23:33.554 [2024-07-15 09:50:01.550713] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:33.813 09:50:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:23:33.813 00:23:33.813 real 0m16.278s 00:23:33.813 user 0m28.713s 00:23:33.813 sys 0m3.078s 00:23:33.813 09:50:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:33.813 09:50:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.813 ************************************ 00:23:33.813 END TEST raid_superblock_test 00:23:33.813 ************************************ 00:23:33.813 09:50:01 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:23:33.813 09:50:01 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:23:33.813 09:50:01 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:23:33.813 09:50:01 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:33.813 09:50:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:33.813 ************************************ 00:23:33.813 START TEST raid_read_error_test 00:23:33.813 ************************************ 00:23:33.813 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 3 read 00:23:33.813 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:23:33.813 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:23:33.813 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:23:33.813 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:23:33.813 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:33.813 09:50:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:23:33.813 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:33.813 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:33.813 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:23:33.813 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:33.813 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:33.813 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:23:33.813 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:33.813 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:33.813 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:23:33.813 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:23:33.813 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:23:33.813 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:23:33.813 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:23:33.813 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:23:33.813 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:23:33.813 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:23:33.813 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:23:33.814 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:23:33.814 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.jqpKxYBvG8 00:23:33.814 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:23:33.814 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=57975 00:23:33.814 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 57975 /var/tmp/spdk-raid.sock 00:23:33.814 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 57975 ']' 00:23:33.814 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:33.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:33.814 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:33.814 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:33.814 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:33.814 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.814 [2024-07-15 09:50:01.875716] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
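For orientation, the bdev stack that raid_read_error_test assembles in the trace that follows can be condensed into a minimal sketch. The RPC socket, tool paths, and bdev names are taken verbatim from the trace; the loop and the $rpc/$sock shorthand are an illustrative reconstruction, not the literal bdev_raid.sh code:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # One error-injectable member per slot: malloc -> error -> passthru.
    for i in 1 2 3; do
        $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev${i}_malloc
        $rpc -s $sock bdev_error_create BaseBdev${i}_malloc
        $rpc -s $sock bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
    done
    # Assemble the passthru bdevs into a raid1 with an on-disk superblock (-s).
    $rpc -s $sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s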
00:23:33.814 [2024-07-15 09:50:01.875973] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:23:34.747 EAL: TSC is not safe to use in SMP mode 00:23:34.747 EAL: TSC is not invariant 00:23:34.747 [2024-07-15 09:50:02.625488] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.747 [2024-07-15 09:50:02.743065] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:23:34.747 [2024-07-15 09:50:02.745683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.747 [2024-07-15 09:50:02.746456] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:34.747 [2024-07-15 09:50:02.746469] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:35.005 09:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:35.005 09:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:23:35.005 09:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:35.005 09:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:35.270 BaseBdev1_malloc 00:23:35.270 09:50:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:23:35.270 true 00:23:35.270 09:50:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:23:35.534 [2024-07-15 09:50:03.537968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:23:35.534 [2024-07-15 09:50:03.538046] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:35.534 [2024-07-15 09:50:03.538079] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e2a9ce34780 00:23:35.534 [2024-07-15 09:50:03.538086] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:35.534 [2024-07-15 09:50:03.538860] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:35.534 [2024-07-15 09:50:03.538889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:35.534 BaseBdev1 00:23:35.534 09:50:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:35.534 09:50:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:35.793 BaseBdev2_malloc 00:23:35.793 09:50:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:23:36.051 true 00:23:36.051 09:50:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:23:36.309 [2024-07-15 09:50:04.238012] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:23:36.309 [2024-07-15 09:50:04.238086] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:36.309 [2024-07-15 09:50:04.238124] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e2a9ce34c80 00:23:36.309 [2024-07-15 09:50:04.238132] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:36.309 [2024-07-15 09:50:04.239012] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:36.309 [2024-07-15 09:50:04.239045] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:36.309 BaseBdev2 00:23:36.309 09:50:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:36.309 09:50:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:36.567 BaseBdev3_malloc 00:23:36.567 09:50:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:23:36.567 true 00:23:36.824 09:50:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:23:36.824 [2024-07-15 09:50:04.866029] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:23:36.824 [2024-07-15 09:50:04.866098] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:36.824 [2024-07-15 09:50:04.866127] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e2a9ce35180 00:23:36.824 [2024-07-15 09:50:04.866134] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:36.825 [2024-07-15 09:50:04.866822] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:36.825 [2024-07-15 09:50:04.866847] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:36.825 BaseBdev3 00:23:36.825 09:50:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:23:37.084 [2024-07-15 09:50:05.074042] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:37.084 [2024-07-15 09:50:05.074657] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:37.084 [2024-07-15 09:50:05.074682] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:37.084 [2024-07-15 09:50:05.074733] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1e2a9ce35400 00:23:37.084 [2024-07-15 09:50:05.074738] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:37.084 [2024-07-15 09:50:05.074768] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e2a9cea0e20 00:23:37.084 [2024-07-15 09:50:05.074839] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1e2a9ce35400 00:23:37.084 [2024-07-15 09:50:05.074843] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1e2a9ce35400 00:23:37.084 [2024-07-15 09:50:05.074861] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:37.084 09:50:05 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:37.084 09:50:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:37.084 09:50:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:37.084 09:50:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:37.084 09:50:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:37.084 09:50:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:37.084 09:50:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:37.084 09:50:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:37.084 09:50:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:37.084 09:50:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:37.084 09:50:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:37.084 09:50:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.342 09:50:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:37.342 "name": "raid_bdev1", 00:23:37.342 "uuid": "9cd51080-428f-11ef-a0af-c98d8ee52a94", 00:23:37.342 "strip_size_kb": 0, 00:23:37.342 "state": "online", 00:23:37.342 "raid_level": "raid1", 00:23:37.342 "superblock": true, 00:23:37.342 "num_base_bdevs": 3, 00:23:37.342 "num_base_bdevs_discovered": 3, 00:23:37.342 "num_base_bdevs_operational": 3, 00:23:37.342 "base_bdevs_list": [ 00:23:37.342 { 00:23:37.342 "name": "BaseBdev1", 00:23:37.342 "uuid": "00239248-40a9-c85c-926c-f599626aae03", 00:23:37.342 "is_configured": true, 00:23:37.342 "data_offset": 2048, 00:23:37.342 "data_size": 63488 00:23:37.342 }, 00:23:37.342 { 00:23:37.342 "name": "BaseBdev2", 00:23:37.342 "uuid": "f4607375-8216-9653-b6b8-5d7e5233dc09", 00:23:37.342 "is_configured": true, 00:23:37.342 "data_offset": 2048, 00:23:37.342 "data_size": 63488 00:23:37.342 }, 00:23:37.342 { 00:23:37.342 "name": "BaseBdev3", 00:23:37.342 "uuid": "63ac1a1a-8e17-e052-a46a-f2a6e9145566", 00:23:37.342 "is_configured": true, 00:23:37.342 "data_offset": 2048, 00:23:37.342 "data_size": 63488 00:23:37.342 } 00:23:37.342 ] 00:23:37.342 }' 00:23:37.342 09:50:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:37.342 09:50:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.601 09:50:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:23:37.601 09:50:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:37.601 [2024-07-15 09:50:05.690166] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e2a9cea0ec0 00:23:38.993 09:50:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:23:38.993 09:50:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:23:38.993 09:50:06 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:23:38.993 09:50:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:23:38.993 09:50:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:23:38.993 09:50:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:38.993 09:50:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:38.993 09:50:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:38.994 09:50:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:38.994 09:50:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:38.994 09:50:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:38.994 09:50:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:38.994 09:50:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:38.994 09:50:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:38.994 09:50:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:38.994 09:50:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.994 09:50:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.252 09:50:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:39.252 "name": "raid_bdev1", 00:23:39.252 "uuid": "9cd51080-428f-11ef-a0af-c98d8ee52a94", 00:23:39.252 "strip_size_kb": 0, 00:23:39.252 "state": "online", 00:23:39.252 "raid_level": "raid1", 00:23:39.252 "superblock": true, 00:23:39.252 "num_base_bdevs": 3, 00:23:39.252 "num_base_bdevs_discovered": 3, 00:23:39.252 "num_base_bdevs_operational": 3, 00:23:39.252 "base_bdevs_list": [ 00:23:39.252 { 00:23:39.252 "name": "BaseBdev1", 00:23:39.252 "uuid": "00239248-40a9-c85c-926c-f599626aae03", 00:23:39.252 "is_configured": true, 00:23:39.252 "data_offset": 2048, 00:23:39.252 "data_size": 63488 00:23:39.252 }, 00:23:39.252 { 00:23:39.252 "name": "BaseBdev2", 00:23:39.252 "uuid": "f4607375-8216-9653-b6b8-5d7e5233dc09", 00:23:39.252 "is_configured": true, 00:23:39.252 "data_offset": 2048, 00:23:39.252 "data_size": 63488 00:23:39.252 }, 00:23:39.252 { 00:23:39.252 "name": "BaseBdev3", 00:23:39.252 "uuid": "63ac1a1a-8e17-e052-a46a-f2a6e9145566", 00:23:39.252 "is_configured": true, 00:23:39.252 "data_offset": 2048, 00:23:39.252 "data_size": 63488 00:23:39.252 } 00:23:39.252 ] 00:23:39.252 }' 00:23:39.252 09:50:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:39.252 09:50:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.511 09:50:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:39.770 [2024-07-15 09:50:07.784014] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:39.770 [2024-07-15 09:50:07.784054] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:39.770 [2024-07-15 09:50:07.784405] 
bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:39.770 [2024-07-15 09:50:07.784417] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:39.770 [2024-07-15 09:50:07.784437] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:39.770 [2024-07-15 09:50:07.784442] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1e2a9ce35400 name raid_bdev1, state offline 00:23:39.770 0 00:23:39.770 09:50:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 57975 00:23:39.770 09:50:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 57975 ']' 00:23:39.770 09:50:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 57975 00:23:39.770 09:50:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:23:39.770 09:50:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:23:39.770 09:50:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 57975 00:23:39.770 09:50:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:23:39.770 09:50:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:23:39.770 09:50:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:23:39.770 killing process with pid 57975 00:23:39.770 09:50:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 57975' 00:23:39.770 09:50:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 57975 00:23:39.770 [2024-07-15 09:50:07.817338] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:39.770 09:50:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 57975 00:23:39.770 [2024-07-15 09:50:07.843668] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:40.047 09:50:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.jqpKxYBvG8 00:23:40.047 09:50:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:23:40.047 09:50:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:23:40.047 09:50:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:23:40.047 09:50:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:23:40.047 09:50:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:40.047 09:50:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:23:40.047 09:50:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:23:40.047 00:23:40.047 real 0m6.256s 00:23:40.047 user 0m9.220s 00:23:40.047 sys 0m1.437s 00:23:40.047 ************************************ 00:23:40.047 END TEST raid_read_error_test 00:23:40.047 ************************************ 00:23:40.047 09:50:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:40.047 09:50:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.047 09:50:08 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:23:40.047 09:50:08 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:23:40.047 09:50:08 bdev_raid -- 
common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:23:40.047 09:50:08 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:40.047 09:50:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:40.304 ************************************ 00:23:40.304 START TEST raid_write_error_test 00:23:40.304 ************************************ 00:23:40.304 09:50:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 3 write 00:23:40.304 09:50:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:23:40.304 09:50:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:23:40.304 09:50:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:23:40.304 09:50:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:23:40.304 09:50:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:40.304 09:50:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:23:40.304 09:50:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:40.304 09:50:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:40.304 09:50:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:23:40.304 09:50:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:40.304 09:50:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:40.304 09:50:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:23:40.304 09:50:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:40.304 09:50:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:40.304 09:50:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:23:40.304 09:50:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:23:40.304 09:50:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:23:40.304 09:50:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:23:40.304 09:50:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:23:40.304 09:50:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:23:40.304 09:50:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:23:40.304 09:50:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:23:40.304 09:50:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:23:40.304 09:50:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:23:40.304 09:50:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.FU9oY2C5Yz 00:23:40.304 09:50:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=58106 00:23:40.304 09:50:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 58106 /var/tmp/spdk-raid.sock 00:23:40.304 09:50:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 58106 ']' 00:23:40.304 09:50:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:23:40.304 09:50:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:40.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:40.304 09:50:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:40.304 09:50:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:40.304 09:50:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.304 09:50:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:23:40.304 [2024-07-15 09:50:08.183195] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:23:40.304 [2024-07-15 09:50:08.183532] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:23:40.870 EAL: TSC is not safe to use in SMP mode 00:23:40.870 EAL: TSC is not invariant 00:23:40.870 [2024-07-15 09:50:08.911804] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.127 [2024-07-15 09:50:09.020801] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:23:41.127 [2024-07-15 09:50:09.023323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.127 [2024-07-15 09:50:09.024088] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:41.127 [2024-07-15 09:50:09.024101] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:41.386 09:50:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:41.386 09:50:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:23:41.386 09:50:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:41.386 09:50:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:41.644 BaseBdev1_malloc 00:23:41.645 09:50:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:23:41.905 true 00:23:41.905 09:50:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:23:42.164 [2024-07-15 09:50:10.103383] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:23:42.164 [2024-07-15 09:50:10.103469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:42.164 [2024-07-15 09:50:10.103499] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2c5bf9c34780 00:23:42.164 [2024-07-15 09:50:10.103507] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:42.164 [2024-07-15 09:50:10.104216] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:42.164 [2024-07-15 09:50:10.104245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:23:42.164 BaseBdev1 00:23:42.164 09:50:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:42.164 09:50:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:42.423 BaseBdev2_malloc 00:23:42.423 09:50:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:23:42.423 true 00:23:42.423 09:50:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:23:42.680 [2024-07-15 09:50:10.723410] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:23:42.680 [2024-07-15 09:50:10.723492] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:42.680 [2024-07-15 09:50:10.723525] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2c5bf9c34c80 00:23:42.680 [2024-07-15 09:50:10.723533] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:42.680 [2024-07-15 09:50:10.724228] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:42.680 [2024-07-15 09:50:10.724257] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:42.680 BaseBdev2 00:23:42.680 09:50:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:42.680 09:50:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:42.938 BaseBdev3_malloc 00:23:42.938 09:50:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:23:43.196 true 00:23:43.196 09:50:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:23:43.454 [2024-07-15 09:50:11.403464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:23:43.454 [2024-07-15 09:50:11.403536] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:43.454 [2024-07-15 09:50:11.403570] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2c5bf9c35180 00:23:43.454 [2024-07-15 09:50:11.403578] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:43.454 [2024-07-15 09:50:11.404391] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:43.454 [2024-07-15 09:50:11.404419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:43.454 BaseBdev3 00:23:43.454 09:50:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:23:43.712 [2024-07-15 09:50:11.631539] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:43.712 [2024-07-15 09:50:11.632528] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:43.712 [2024-07-15 09:50:11.632560] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:43.712 [2024-07-15 09:50:11.632620] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2c5bf9c35400 00:23:43.712 [2024-07-15 09:50:11.632626] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:43.712 [2024-07-15 09:50:11.632663] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2c5bf9ca0e20 00:23:43.712 [2024-07-15 09:50:11.632743] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2c5bf9c35400 00:23:43.712 [2024-07-15 09:50:11.632746] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2c5bf9c35400 00:23:43.712 [2024-07-15 09:50:11.632772] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:43.712 09:50:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:43.712 09:50:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:43.712 09:50:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:43.712 09:50:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:43.712 09:50:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:43.712 09:50:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:43.712 09:50:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:43.712 09:50:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:43.712 09:50:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:43.712 09:50:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:43.712 09:50:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:43.712 09:50:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.969 09:50:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:43.969 "name": "raid_bdev1", 00:23:43.969 "uuid": "a0bda81f-428f-11ef-a0af-c98d8ee52a94", 00:23:43.969 "strip_size_kb": 0, 00:23:43.969 "state": "online", 00:23:43.969 "raid_level": "raid1", 00:23:43.969 "superblock": true, 00:23:43.969 "num_base_bdevs": 3, 00:23:43.969 "num_base_bdevs_discovered": 3, 00:23:43.969 "num_base_bdevs_operational": 3, 00:23:43.969 "base_bdevs_list": [ 00:23:43.969 { 00:23:43.969 "name": "BaseBdev1", 00:23:43.969 "uuid": "6b162462-2d2a-635a-92f7-509e87645c65", 00:23:43.969 "is_configured": true, 00:23:43.969 "data_offset": 2048, 00:23:43.969 "data_size": 63488 00:23:43.969 }, 00:23:43.969 { 00:23:43.969 "name": "BaseBdev2", 00:23:43.969 "uuid": "3e41a471-4297-0a57-aa5d-b5f2a2cf807e", 00:23:43.969 "is_configured": true, 00:23:43.969 "data_offset": 2048, 00:23:43.969 "data_size": 63488 00:23:43.969 }, 00:23:43.969 { 00:23:43.969 "name": "BaseBdev3", 00:23:43.969 "uuid": "a1d8fd2c-5541-4f52-92b3-ea597d5eedb6", 00:23:43.969 "is_configured": true, 00:23:43.969 "data_offset": 2048, 00:23:43.969 
"data_size": 63488 00:23:43.969 } 00:23:43.969 ] 00:23:43.969 }' 00:23:43.969 09:50:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:43.969 09:50:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:44.226 09:50:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:23:44.226 09:50:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:44.226 [2024-07-15 09:50:12.267617] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2c5bf9ca0ec0 00:23:45.156 09:50:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:23:45.431 [2024-07-15 09:50:13.386637] bdev_raid.c:2222:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:23:45.431 [2024-07-15 09:50:13.386703] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:45.431 [2024-07-15 09:50:13.386834] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x2c5bf9ca0ec0 00:23:45.431 09:50:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:23:45.431 09:50:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:23:45.431 09:50:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:23:45.431 09:50:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=2 00:23:45.431 09:50:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:45.431 09:50:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:45.431 09:50:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:45.431 09:50:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:45.431 09:50:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:45.431 09:50:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:45.431 09:50:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:45.431 09:50:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:45.431 09:50:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:45.431 09:50:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:45.431 09:50:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.431 09:50:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:45.694 09:50:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:45.694 "name": "raid_bdev1", 00:23:45.694 "uuid": "a0bda81f-428f-11ef-a0af-c98d8ee52a94", 00:23:45.694 "strip_size_kb": 0, 00:23:45.694 "state": "online", 00:23:45.694 "raid_level": "raid1", 00:23:45.694 "superblock": true, 00:23:45.694 "num_base_bdevs": 3, 00:23:45.694 
"num_base_bdevs_discovered": 2, 00:23:45.694 "num_base_bdevs_operational": 2, 00:23:45.694 "base_bdevs_list": [ 00:23:45.694 { 00:23:45.694 "name": null, 00:23:45.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:45.694 "is_configured": false, 00:23:45.694 "data_offset": 2048, 00:23:45.694 "data_size": 63488 00:23:45.694 }, 00:23:45.694 { 00:23:45.694 "name": "BaseBdev2", 00:23:45.694 "uuid": "3e41a471-4297-0a57-aa5d-b5f2a2cf807e", 00:23:45.694 "is_configured": true, 00:23:45.694 "data_offset": 2048, 00:23:45.694 "data_size": 63488 00:23:45.694 }, 00:23:45.694 { 00:23:45.694 "name": "BaseBdev3", 00:23:45.694 "uuid": "a1d8fd2c-5541-4f52-92b3-ea597d5eedb6", 00:23:45.694 "is_configured": true, 00:23:45.694 "data_offset": 2048, 00:23:45.694 "data_size": 63488 00:23:45.694 } 00:23:45.694 ] 00:23:45.694 }' 00:23:45.694 09:50:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:45.694 09:50:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:46.259 09:50:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:46.518 [2024-07-15 09:50:14.541998] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:46.518 [2024-07-15 09:50:14.542032] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:46.518 [2024-07-15 09:50:14.542401] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:46.518 [2024-07-15 09:50:14.542415] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:46.518 [2024-07-15 09:50:14.542430] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:46.518 [2024-07-15 09:50:14.542434] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2c5bf9c35400 name raid_bdev1, state offline 00:23:46.518 0 00:23:46.518 09:50:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 58106 00:23:46.518 09:50:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 58106 ']' 00:23:46.518 09:50:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 58106 00:23:46.518 09:50:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:23:46.518 09:50:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:23:46.518 09:50:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:23:46.518 09:50:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 58106 00:23:46.518 09:50:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:23:46.518 09:50:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:23:46.518 killing process with pid 58106 00:23:46.518 09:50:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58106' 00:23:46.518 09:50:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 58106 00:23:46.518 [2024-07-15 09:50:14.575597] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:46.518 09:50:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 58106 00:23:46.518 [2024-07-15 09:50:14.601438] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: 
raid_bdev_exit 00:23:46.776 09:50:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.FU9oY2C5Yz 00:23:46.776 09:50:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:23:46.776 09:50:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:23:46.776 09:50:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:23:46.776 09:50:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:23:46.776 09:50:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:46.776 09:50:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:23:46.776 09:50:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:23:46.776 00:23:46.776 real 0m6.703s 00:23:46.776 user 0m10.242s 00:23:46.776 sys 0m1.393s 00:23:46.776 ************************************ 00:23:46.776 END TEST raid_write_error_test 00:23:46.776 ************************************ 00:23:46.776 09:50:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:46.776 09:50:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:47.035 09:50:14 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:23:47.035 09:50:14 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:23:47.035 09:50:14 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:23:47.035 09:50:14 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:23:47.035 09:50:14 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:23:47.035 09:50:14 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:47.035 09:50:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:47.035 ************************************ 00:23:47.035 START TEST raid_state_function_test 00:23:47.035 ************************************ 00:23:47.035 09:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 4 false 00:23:47.035 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:23:47.035 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:23:47.035 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:23:47.035 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:23:47.035 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:23:47.035 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:47.035 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:23:47.035 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:47.035 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:47.035 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:23:47.035 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:47.035 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:47.035 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- 
# echo BaseBdev3 00:23:47.035 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:47.035 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:47.035 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:23:47.035 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:47.035 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:47.035 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:47.035 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:23:47.035 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:23:47.035 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:23:47.035 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:23:47.035 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:23:47.035 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:23:47.035 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:23:47.035 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:23:47.035 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:23:47.035 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:23:47.035 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=58239 00:23:47.035 Process raid pid: 58239 00:23:47.035 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 58239' 00:23:47.035 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 58239 /var/tmp/spdk-raid.sock 00:23:47.035 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:47.035 09:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 58239 ']' 00:23:47.035 09:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:47.035 09:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:47.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:47.035 09:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:47.035 09:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:47.035 09:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:47.035 [2024-07-15 09:50:14.914624] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
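The raid_state_function_test starting here drives the same RPC surface without backing devices: it asks for Existed_Raid (raid0, 64 KiB strip, four members) before any BaseBdev exists, so the raid sits in the "configuring" state with zero discovered members, as the JSON dumps below show. A minimal sketch of that create-and-verify step, reusing the $rpc/$sock shorthand from the earlier sketch, with the command line and jq filter taken verbatim from the trace:

    $rpc -s $sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    # verify_raid_bdev_state isolates the raid of interest and inspects its fields.
    $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'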
00:23:47.035 [2024-07-15 09:50:14.915016] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:23:47.603 EAL: TSC is not safe to use in SMP mode 00:23:47.603 EAL: TSC is not invariant 00:23:47.603 [2024-07-15 09:50:15.658928] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.861 [2024-07-15 09:50:15.766920] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:23:47.861 [2024-07-15 09:50:15.769433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:47.861 [2024-07-15 09:50:15.770182] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:47.861 [2024-07-15 09:50:15.770194] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:48.118 09:50:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:48.118 09:50:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:23:48.118 09:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:48.376 [2024-07-15 09:50:16.361528] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:48.376 [2024-07-15 09:50:16.361611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:48.376 [2024-07-15 09:50:16.361616] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:48.376 [2024-07-15 09:50:16.361625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:48.376 [2024-07-15 09:50:16.361628] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:48.376 [2024-07-15 09:50:16.361635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:48.376 [2024-07-15 09:50:16.361638] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:48.376 [2024-07-15 09:50:16.361654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:48.376 09:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:48.376 09:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:48.376 09:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:48.376 09:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:48.376 09:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:48.376 09:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:48.376 09:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:48.376 09:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:48.376 09:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:48.376 09:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:48.376 09:50:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:48.376 09:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:48.637 09:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:48.637 "name": "Existed_Raid", 00:23:48.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:48.637 "strip_size_kb": 64, 00:23:48.637 "state": "configuring", 00:23:48.637 "raid_level": "raid0", 00:23:48.637 "superblock": false, 00:23:48.637 "num_base_bdevs": 4, 00:23:48.637 "num_base_bdevs_discovered": 0, 00:23:48.637 "num_base_bdevs_operational": 4, 00:23:48.637 "base_bdevs_list": [ 00:23:48.637 { 00:23:48.637 "name": "BaseBdev1", 00:23:48.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:48.637 "is_configured": false, 00:23:48.637 "data_offset": 0, 00:23:48.637 "data_size": 0 00:23:48.637 }, 00:23:48.637 { 00:23:48.637 "name": "BaseBdev2", 00:23:48.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:48.637 "is_configured": false, 00:23:48.637 "data_offset": 0, 00:23:48.637 "data_size": 0 00:23:48.637 }, 00:23:48.637 { 00:23:48.637 "name": "BaseBdev3", 00:23:48.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:48.637 "is_configured": false, 00:23:48.637 "data_offset": 0, 00:23:48.637 "data_size": 0 00:23:48.637 }, 00:23:48.637 { 00:23:48.637 "name": "BaseBdev4", 00:23:48.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:48.637 "is_configured": false, 00:23:48.637 "data_offset": 0, 00:23:48.637 "data_size": 0 00:23:48.637 } 00:23:48.637 ] 00:23:48.637 }' 00:23:48.637 09:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:48.637 09:50:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.895 09:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:49.153 [2024-07-15 09:50:17.157556] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:49.153 [2024-07-15 09:50:17.157601] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x34688e634500 name Existed_Raid, state configuring 00:23:49.153 09:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:49.412 [2024-07-15 09:50:17.417586] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:49.412 [2024-07-15 09:50:17.417663] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:49.412 [2024-07-15 09:50:17.417669] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:49.412 [2024-07-15 09:50:17.417677] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:49.412 [2024-07-15 09:50:17.417681] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:49.412 [2024-07-15 09:50:17.417694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:49.412 [2024-07-15 09:50:17.417698] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:23:49.412 [2024-07-15 09:50:17.417704] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:49.412 09:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:49.671 [2024-07-15 09:50:17.674819] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:49.671 BaseBdev1 00:23:49.671 09:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:23:49.671 09:50:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:23:49.671 09:50:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:49.671 09:50:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:49.671 09:50:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:49.671 09:50:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:49.671 09:50:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:49.929 09:50:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:50.187 [ 00:23:50.187 { 00:23:50.187 "name": "BaseBdev1", 00:23:50.187 "aliases": [ 00:23:50.187 "a4579ac4-428f-11ef-a0af-c98d8ee52a94" 00:23:50.187 ], 00:23:50.187 "product_name": "Malloc disk", 00:23:50.187 "block_size": 512, 00:23:50.187 "num_blocks": 65536, 00:23:50.187 "uuid": "a4579ac4-428f-11ef-a0af-c98d8ee52a94", 00:23:50.187 "assigned_rate_limits": { 00:23:50.187 "rw_ios_per_sec": 0, 00:23:50.187 "rw_mbytes_per_sec": 0, 00:23:50.187 "r_mbytes_per_sec": 0, 00:23:50.187 "w_mbytes_per_sec": 0 00:23:50.187 }, 00:23:50.187 "claimed": true, 00:23:50.187 "claim_type": "exclusive_write", 00:23:50.187 "zoned": false, 00:23:50.187 "supported_io_types": { 00:23:50.187 "read": true, 00:23:50.187 "write": true, 00:23:50.187 "unmap": true, 00:23:50.187 "flush": true, 00:23:50.187 "reset": true, 00:23:50.187 "nvme_admin": false, 00:23:50.187 "nvme_io": false, 00:23:50.187 "nvme_io_md": false, 00:23:50.187 "write_zeroes": true, 00:23:50.187 "zcopy": true, 00:23:50.187 "get_zone_info": false, 00:23:50.187 "zone_management": false, 00:23:50.187 "zone_append": false, 00:23:50.187 "compare": false, 00:23:50.187 "compare_and_write": false, 00:23:50.187 "abort": true, 00:23:50.187 "seek_hole": false, 00:23:50.187 "seek_data": false, 00:23:50.187 "copy": true, 00:23:50.187 "nvme_iov_md": false 00:23:50.187 }, 00:23:50.187 "memory_domains": [ 00:23:50.187 { 00:23:50.187 "dma_device_id": "system", 00:23:50.187 "dma_device_type": 1 00:23:50.187 }, 00:23:50.187 { 00:23:50.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:50.187 "dma_device_type": 2 00:23:50.187 } 00:23:50.187 ], 00:23:50.187 "driver_specific": {} 00:23:50.187 } 00:23:50.187 ] 00:23:50.187 09:50:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:50.187 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:50.187 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:23:50.187 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:50.187 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:50.187 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:50.187 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:50.187 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:50.187 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:50.187 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:50.187 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:50.187 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:50.187 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:50.446 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:50.446 "name": "Existed_Raid", 00:23:50.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:50.446 "strip_size_kb": 64, 00:23:50.446 "state": "configuring", 00:23:50.446 "raid_level": "raid0", 00:23:50.446 "superblock": false, 00:23:50.446 "num_base_bdevs": 4, 00:23:50.446 "num_base_bdevs_discovered": 1, 00:23:50.446 "num_base_bdevs_operational": 4, 00:23:50.446 "base_bdevs_list": [ 00:23:50.446 { 00:23:50.446 "name": "BaseBdev1", 00:23:50.446 "uuid": "a4579ac4-428f-11ef-a0af-c98d8ee52a94", 00:23:50.446 "is_configured": true, 00:23:50.446 "data_offset": 0, 00:23:50.446 "data_size": 65536 00:23:50.446 }, 00:23:50.446 { 00:23:50.446 "name": "BaseBdev2", 00:23:50.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:50.446 "is_configured": false, 00:23:50.446 "data_offset": 0, 00:23:50.446 "data_size": 0 00:23:50.446 }, 00:23:50.446 { 00:23:50.446 "name": "BaseBdev3", 00:23:50.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:50.446 "is_configured": false, 00:23:50.446 "data_offset": 0, 00:23:50.446 "data_size": 0 00:23:50.446 }, 00:23:50.446 { 00:23:50.446 "name": "BaseBdev4", 00:23:50.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:50.446 "is_configured": false, 00:23:50.446 "data_offset": 0, 00:23:50.446 "data_size": 0 00:23:50.446 } 00:23:50.446 ] 00:23:50.446 }' 00:23:50.446 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:50.446 09:50:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:51.013 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:51.272 [2024-07-15 09:50:19.153675] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:51.272 [2024-07-15 09:50:19.153733] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x34688e634500 name Existed_Raid, state configuring 00:23:51.272 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 
BaseBdev4' -n Existed_Raid 00:23:51.537 [2024-07-15 09:50:19.417730] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:51.537 [2024-07-15 09:50:19.418741] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:51.537 [2024-07-15 09:50:19.418799] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:51.537 [2024-07-15 09:50:19.418804] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:51.537 [2024-07-15 09:50:19.418812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:51.537 [2024-07-15 09:50:19.418815] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:51.537 [2024-07-15 09:50:19.418822] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:51.537 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:23:51.537 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:51.537 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:51.537 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:51.537 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:51.537 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:51.537 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:51.537 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:51.537 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:51.537 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:51.537 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:51.537 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:51.537 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:51.537 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.796 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:51.796 "name": "Existed_Raid", 00:23:51.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.796 "strip_size_kb": 64, 00:23:51.796 "state": "configuring", 00:23:51.796 "raid_level": "raid0", 00:23:51.796 "superblock": false, 00:23:51.796 "num_base_bdevs": 4, 00:23:51.796 "num_base_bdevs_discovered": 1, 00:23:51.796 "num_base_bdevs_operational": 4, 00:23:51.796 "base_bdevs_list": [ 00:23:51.796 { 00:23:51.796 "name": "BaseBdev1", 00:23:51.796 "uuid": "a4579ac4-428f-11ef-a0af-c98d8ee52a94", 00:23:51.796 "is_configured": true, 00:23:51.796 "data_offset": 0, 00:23:51.796 "data_size": 65536 00:23:51.796 }, 00:23:51.796 { 00:23:51.796 "name": "BaseBdev2", 00:23:51.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.796 "is_configured": false, 00:23:51.796 "data_offset": 0, 00:23:51.796 "data_size": 
0 00:23:51.796 }, 00:23:51.796 { 00:23:51.796 "name": "BaseBdev3", 00:23:51.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.796 "is_configured": false, 00:23:51.796 "data_offset": 0, 00:23:51.796 "data_size": 0 00:23:51.796 }, 00:23:51.796 { 00:23:51.796 "name": "BaseBdev4", 00:23:51.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.796 "is_configured": false, 00:23:51.796 "data_offset": 0, 00:23:51.796 "data_size": 0 00:23:51.796 } 00:23:51.796 ] 00:23:51.796 }' 00:23:51.796 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:51.796 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:52.055 09:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:52.623 [2024-07-15 09:50:20.457945] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:52.623 BaseBdev2 00:23:52.623 09:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:23:52.623 09:50:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:23:52.623 09:50:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:52.623 09:50:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:52.623 09:50:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:52.623 09:50:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:52.623 09:50:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:52.623 09:50:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:52.882 [ 00:23:52.882 { 00:23:52.882 "name": "BaseBdev2", 00:23:52.882 "aliases": [ 00:23:52.882 "a6007048-428f-11ef-a0af-c98d8ee52a94" 00:23:52.882 ], 00:23:52.882 "product_name": "Malloc disk", 00:23:52.882 "block_size": 512, 00:23:52.882 "num_blocks": 65536, 00:23:52.882 "uuid": "a6007048-428f-11ef-a0af-c98d8ee52a94", 00:23:52.882 "assigned_rate_limits": { 00:23:52.882 "rw_ios_per_sec": 0, 00:23:52.883 "rw_mbytes_per_sec": 0, 00:23:52.883 "r_mbytes_per_sec": 0, 00:23:52.883 "w_mbytes_per_sec": 0 00:23:52.883 }, 00:23:52.883 "claimed": true, 00:23:52.883 "claim_type": "exclusive_write", 00:23:52.883 "zoned": false, 00:23:52.883 "supported_io_types": { 00:23:52.883 "read": true, 00:23:52.883 "write": true, 00:23:52.883 "unmap": true, 00:23:52.883 "flush": true, 00:23:52.883 "reset": true, 00:23:52.883 "nvme_admin": false, 00:23:52.883 "nvme_io": false, 00:23:52.883 "nvme_io_md": false, 00:23:52.883 "write_zeroes": true, 00:23:52.883 "zcopy": true, 00:23:52.883 "get_zone_info": false, 00:23:52.883 "zone_management": false, 00:23:52.883 "zone_append": false, 00:23:52.883 "compare": false, 00:23:52.883 "compare_and_write": false, 00:23:52.883 "abort": true, 00:23:52.883 "seek_hole": false, 00:23:52.883 "seek_data": false, 00:23:52.883 "copy": true, 00:23:52.883 "nvme_iov_md": false 00:23:52.883 }, 00:23:52.883 "memory_domains": [ 00:23:52.883 { 00:23:52.883 "dma_device_id": "system", 00:23:52.883 "dma_device_type": 1 
00:23:52.883 }, 00:23:52.883 { 00:23:52.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:52.883 "dma_device_type": 2 00:23:52.883 } 00:23:52.883 ], 00:23:52.883 "driver_specific": {} 00:23:52.883 } 00:23:52.883 ] 00:23:52.883 09:50:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:52.883 09:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:52.883 09:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:52.883 09:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:52.883 09:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:52.883 09:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:52.883 09:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:52.883 09:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:52.883 09:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:52.883 09:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:52.883 09:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:52.883 09:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:52.883 09:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:52.883 09:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.883 09:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:53.509 09:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:53.509 "name": "Existed_Raid", 00:23:53.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:53.509 "strip_size_kb": 64, 00:23:53.509 "state": "configuring", 00:23:53.509 "raid_level": "raid0", 00:23:53.509 "superblock": false, 00:23:53.509 "num_base_bdevs": 4, 00:23:53.509 "num_base_bdevs_discovered": 2, 00:23:53.509 "num_base_bdevs_operational": 4, 00:23:53.509 "base_bdevs_list": [ 00:23:53.509 { 00:23:53.509 "name": "BaseBdev1", 00:23:53.509 "uuid": "a4579ac4-428f-11ef-a0af-c98d8ee52a94", 00:23:53.509 "is_configured": true, 00:23:53.509 "data_offset": 0, 00:23:53.509 "data_size": 65536 00:23:53.509 }, 00:23:53.509 { 00:23:53.509 "name": "BaseBdev2", 00:23:53.509 "uuid": "a6007048-428f-11ef-a0af-c98d8ee52a94", 00:23:53.509 "is_configured": true, 00:23:53.509 "data_offset": 0, 00:23:53.509 "data_size": 65536 00:23:53.509 }, 00:23:53.509 { 00:23:53.509 "name": "BaseBdev3", 00:23:53.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:53.509 "is_configured": false, 00:23:53.509 "data_offset": 0, 00:23:53.509 "data_size": 0 00:23:53.509 }, 00:23:53.509 { 00:23:53.509 "name": "BaseBdev4", 00:23:53.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:53.509 "is_configured": false, 00:23:53.509 "data_offset": 0, 00:23:53.509 "data_size": 0 00:23:53.509 } 00:23:53.509 ] 00:23:53.509 }' 00:23:53.509 09:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:53.509 09:50:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:54.077 09:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:54.337 [2024-07-15 09:50:22.190029] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:54.337 BaseBdev3 00:23:54.337 09:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:23:54.337 09:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:23:54.337 09:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:54.337 09:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:54.337 09:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:54.337 09:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:54.337 09:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:54.596 09:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:54.856 [ 00:23:54.856 { 00:23:54.856 "name": "BaseBdev3", 00:23:54.856 "aliases": [ 00:23:54.856 "a708bc55-428f-11ef-a0af-c98d8ee52a94" 00:23:54.856 ], 00:23:54.856 "product_name": "Malloc disk", 00:23:54.856 "block_size": 512, 00:23:54.856 "num_blocks": 65536, 00:23:54.856 "uuid": "a708bc55-428f-11ef-a0af-c98d8ee52a94", 00:23:54.856 "assigned_rate_limits": { 00:23:54.856 "rw_ios_per_sec": 0, 00:23:54.856 "rw_mbytes_per_sec": 0, 00:23:54.856 "r_mbytes_per_sec": 0, 00:23:54.856 "w_mbytes_per_sec": 0 00:23:54.856 }, 00:23:54.856 "claimed": true, 00:23:54.856 "claim_type": "exclusive_write", 00:23:54.856 "zoned": false, 00:23:54.856 "supported_io_types": { 00:23:54.856 "read": true, 00:23:54.856 "write": true, 00:23:54.856 "unmap": true, 00:23:54.856 "flush": true, 00:23:54.856 "reset": true, 00:23:54.856 "nvme_admin": false, 00:23:54.856 "nvme_io": false, 00:23:54.856 "nvme_io_md": false, 00:23:54.856 "write_zeroes": true, 00:23:54.856 "zcopy": true, 00:23:54.856 "get_zone_info": false, 00:23:54.856 "zone_management": false, 00:23:54.856 "zone_append": false, 00:23:54.856 "compare": false, 00:23:54.856 "compare_and_write": false, 00:23:54.856 "abort": true, 00:23:54.856 "seek_hole": false, 00:23:54.856 "seek_data": false, 00:23:54.856 "copy": true, 00:23:54.856 "nvme_iov_md": false 00:23:54.856 }, 00:23:54.856 "memory_domains": [ 00:23:54.856 { 00:23:54.856 "dma_device_id": "system", 00:23:54.856 "dma_device_type": 1 00:23:54.856 }, 00:23:54.856 { 00:23:54.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:54.856 "dma_device_type": 2 00:23:54.856 } 00:23:54.856 ], 00:23:54.856 "driver_specific": {} 00:23:54.856 } 00:23:54.856 ] 00:23:54.856 09:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:54.856 09:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:54.856 09:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:54.856 09:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:54.856 09:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:54.856 09:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:54.856 09:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:54.856 09:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:54.856 09:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:54.856 09:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:54.856 09:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:54.856 09:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:54.856 09:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:54.856 09:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.856 09:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:55.116 09:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:55.116 "name": "Existed_Raid", 00:23:55.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:55.116 "strip_size_kb": 64, 00:23:55.116 "state": "configuring", 00:23:55.116 "raid_level": "raid0", 00:23:55.116 "superblock": false, 00:23:55.116 "num_base_bdevs": 4, 00:23:55.116 "num_base_bdevs_discovered": 3, 00:23:55.116 "num_base_bdevs_operational": 4, 00:23:55.116 "base_bdevs_list": [ 00:23:55.116 { 00:23:55.116 "name": "BaseBdev1", 00:23:55.116 "uuid": "a4579ac4-428f-11ef-a0af-c98d8ee52a94", 00:23:55.116 "is_configured": true, 00:23:55.116 "data_offset": 0, 00:23:55.116 "data_size": 65536 00:23:55.116 }, 00:23:55.116 { 00:23:55.116 "name": "BaseBdev2", 00:23:55.116 "uuid": "a6007048-428f-11ef-a0af-c98d8ee52a94", 00:23:55.116 "is_configured": true, 00:23:55.116 "data_offset": 0, 00:23:55.116 "data_size": 65536 00:23:55.116 }, 00:23:55.116 { 00:23:55.116 "name": "BaseBdev3", 00:23:55.116 "uuid": "a708bc55-428f-11ef-a0af-c98d8ee52a94", 00:23:55.116 "is_configured": true, 00:23:55.116 "data_offset": 0, 00:23:55.116 "data_size": 65536 00:23:55.116 }, 00:23:55.116 { 00:23:55.116 "name": "BaseBdev4", 00:23:55.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:55.116 "is_configured": false, 00:23:55.116 "data_offset": 0, 00:23:55.116 "data_size": 0 00:23:55.116 } 00:23:55.116 ] 00:23:55.116 }' 00:23:55.116 09:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:55.116 09:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.374 09:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:55.632 [2024-07-15 09:50:23.582055] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:55.632 [2024-07-15 09:50:23.582090] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x34688e634a00 00:23:55.632 [2024-07-15 09:50:23.582094] bdev_raid.c:1696:raid_bdev_configure_cont: 
*DEBUG*: blockcnt 262144, blocklen 512 00:23:55.632 [2024-07-15 09:50:23.582125] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x34688e697e20 00:23:55.632 [2024-07-15 09:50:23.582223] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x34688e634a00 00:23:55.632 [2024-07-15 09:50:23.582227] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x34688e634a00 00:23:55.632 [2024-07-15 09:50:23.582264] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:55.632 BaseBdev4 00:23:55.632 09:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:23:55.632 09:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:23:55.632 09:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:55.632 09:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:55.632 09:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:55.632 09:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:55.632 09:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:55.892 09:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:56.151 [ 00:23:56.151 { 00:23:56.151 "name": "BaseBdev4", 00:23:56.151 "aliases": [ 00:23:56.151 "a7dd24dc-428f-11ef-a0af-c98d8ee52a94" 00:23:56.151 ], 00:23:56.151 "product_name": "Malloc disk", 00:23:56.151 "block_size": 512, 00:23:56.151 "num_blocks": 65536, 00:23:56.151 "uuid": "a7dd24dc-428f-11ef-a0af-c98d8ee52a94", 00:23:56.151 "assigned_rate_limits": { 00:23:56.151 "rw_ios_per_sec": 0, 00:23:56.151 "rw_mbytes_per_sec": 0, 00:23:56.151 "r_mbytes_per_sec": 0, 00:23:56.151 "w_mbytes_per_sec": 0 00:23:56.151 }, 00:23:56.151 "claimed": true, 00:23:56.151 "claim_type": "exclusive_write", 00:23:56.151 "zoned": false, 00:23:56.151 "supported_io_types": { 00:23:56.151 "read": true, 00:23:56.151 "write": true, 00:23:56.151 "unmap": true, 00:23:56.151 "flush": true, 00:23:56.151 "reset": true, 00:23:56.151 "nvme_admin": false, 00:23:56.151 "nvme_io": false, 00:23:56.151 "nvme_io_md": false, 00:23:56.151 "write_zeroes": true, 00:23:56.151 "zcopy": true, 00:23:56.151 "get_zone_info": false, 00:23:56.151 "zone_management": false, 00:23:56.151 "zone_append": false, 00:23:56.151 "compare": false, 00:23:56.151 "compare_and_write": false, 00:23:56.151 "abort": true, 00:23:56.151 "seek_hole": false, 00:23:56.151 "seek_data": false, 00:23:56.151 "copy": true, 00:23:56.151 "nvme_iov_md": false 00:23:56.151 }, 00:23:56.151 "memory_domains": [ 00:23:56.151 { 00:23:56.151 "dma_device_id": "system", 00:23:56.151 "dma_device_type": 1 00:23:56.151 }, 00:23:56.151 { 00:23:56.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:56.151 "dma_device_type": 2 00:23:56.151 } 00:23:56.151 ], 00:23:56.151 "driver_specific": {} 00:23:56.151 } 00:23:56.151 ] 00:23:56.151 09:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:56.151 09:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:56.151 09:50:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:56.151 09:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:23:56.151 09:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:56.151 09:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:56.151 09:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:56.151 09:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:56.151 09:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:56.151 09:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:56.151 09:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:56.151 09:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:56.151 09:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:56.151 09:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:56.151 09:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:56.410 09:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:56.410 "name": "Existed_Raid", 00:23:56.410 "uuid": "a7dd2c23-428f-11ef-a0af-c98d8ee52a94", 00:23:56.410 "strip_size_kb": 64, 00:23:56.410 "state": "online", 00:23:56.410 "raid_level": "raid0", 00:23:56.410 "superblock": false, 00:23:56.410 "num_base_bdevs": 4, 00:23:56.410 "num_base_bdevs_discovered": 4, 00:23:56.410 "num_base_bdevs_operational": 4, 00:23:56.410 "base_bdevs_list": [ 00:23:56.410 { 00:23:56.410 "name": "BaseBdev1", 00:23:56.411 "uuid": "a4579ac4-428f-11ef-a0af-c98d8ee52a94", 00:23:56.411 "is_configured": true, 00:23:56.411 "data_offset": 0, 00:23:56.411 "data_size": 65536 00:23:56.411 }, 00:23:56.411 { 00:23:56.411 "name": "BaseBdev2", 00:23:56.411 "uuid": "a6007048-428f-11ef-a0af-c98d8ee52a94", 00:23:56.411 "is_configured": true, 00:23:56.411 "data_offset": 0, 00:23:56.411 "data_size": 65536 00:23:56.411 }, 00:23:56.411 { 00:23:56.411 "name": "BaseBdev3", 00:23:56.411 "uuid": "a708bc55-428f-11ef-a0af-c98d8ee52a94", 00:23:56.411 "is_configured": true, 00:23:56.411 "data_offset": 0, 00:23:56.411 "data_size": 65536 00:23:56.411 }, 00:23:56.411 { 00:23:56.411 "name": "BaseBdev4", 00:23:56.411 "uuid": "a7dd24dc-428f-11ef-a0af-c98d8ee52a94", 00:23:56.411 "is_configured": true, 00:23:56.411 "data_offset": 0, 00:23:56.411 "data_size": 65536 00:23:56.411 } 00:23:56.411 ] 00:23:56.411 }' 00:23:56.411 09:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:56.411 09:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.678 09:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:23:56.678 09:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:56.679 09:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:56.679 
09:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:56.679 09:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:56.679 09:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:56.679 09:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:56.679 09:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:56.938 [2024-07-15 09:50:24.841998] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:56.938 09:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:56.938 "name": "Existed_Raid", 00:23:56.938 "aliases": [ 00:23:56.938 "a7dd2c23-428f-11ef-a0af-c98d8ee52a94" 00:23:56.938 ], 00:23:56.938 "product_name": "Raid Volume", 00:23:56.938 "block_size": 512, 00:23:56.938 "num_blocks": 262144, 00:23:56.938 "uuid": "a7dd2c23-428f-11ef-a0af-c98d8ee52a94", 00:23:56.938 "assigned_rate_limits": { 00:23:56.938 "rw_ios_per_sec": 0, 00:23:56.938 "rw_mbytes_per_sec": 0, 00:23:56.938 "r_mbytes_per_sec": 0, 00:23:56.938 "w_mbytes_per_sec": 0 00:23:56.938 }, 00:23:56.938 "claimed": false, 00:23:56.938 "zoned": false, 00:23:56.938 "supported_io_types": { 00:23:56.938 "read": true, 00:23:56.938 "write": true, 00:23:56.938 "unmap": true, 00:23:56.938 "flush": true, 00:23:56.938 "reset": true, 00:23:56.938 "nvme_admin": false, 00:23:56.938 "nvme_io": false, 00:23:56.938 "nvme_io_md": false, 00:23:56.938 "write_zeroes": true, 00:23:56.938 "zcopy": false, 00:23:56.938 "get_zone_info": false, 00:23:56.938 "zone_management": false, 00:23:56.938 "zone_append": false, 00:23:56.938 "compare": false, 00:23:56.938 "compare_and_write": false, 00:23:56.938 "abort": false, 00:23:56.938 "seek_hole": false, 00:23:56.938 "seek_data": false, 00:23:56.938 "copy": false, 00:23:56.938 "nvme_iov_md": false 00:23:56.938 }, 00:23:56.938 "memory_domains": [ 00:23:56.938 { 00:23:56.938 "dma_device_id": "system", 00:23:56.938 "dma_device_type": 1 00:23:56.938 }, 00:23:56.938 { 00:23:56.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:56.938 "dma_device_type": 2 00:23:56.938 }, 00:23:56.938 { 00:23:56.938 "dma_device_id": "system", 00:23:56.938 "dma_device_type": 1 00:23:56.938 }, 00:23:56.938 { 00:23:56.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:56.938 "dma_device_type": 2 00:23:56.938 }, 00:23:56.938 { 00:23:56.938 "dma_device_id": "system", 00:23:56.938 "dma_device_type": 1 00:23:56.938 }, 00:23:56.938 { 00:23:56.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:56.938 "dma_device_type": 2 00:23:56.938 }, 00:23:56.938 { 00:23:56.938 "dma_device_id": "system", 00:23:56.938 "dma_device_type": 1 00:23:56.938 }, 00:23:56.938 { 00:23:56.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:56.938 "dma_device_type": 2 00:23:56.938 } 00:23:56.938 ], 00:23:56.938 "driver_specific": { 00:23:56.938 "raid": { 00:23:56.938 "uuid": "a7dd2c23-428f-11ef-a0af-c98d8ee52a94", 00:23:56.938 "strip_size_kb": 64, 00:23:56.938 "state": "online", 00:23:56.938 "raid_level": "raid0", 00:23:56.938 "superblock": false, 00:23:56.938 "num_base_bdevs": 4, 00:23:56.938 "num_base_bdevs_discovered": 4, 00:23:56.938 "num_base_bdevs_operational": 4, 00:23:56.938 "base_bdevs_list": [ 00:23:56.938 { 00:23:56.938 "name": "BaseBdev1", 00:23:56.938 "uuid": "a4579ac4-428f-11ef-a0af-c98d8ee52a94", 00:23:56.938 
"is_configured": true, 00:23:56.938 "data_offset": 0, 00:23:56.938 "data_size": 65536 00:23:56.938 }, 00:23:56.938 { 00:23:56.938 "name": "BaseBdev2", 00:23:56.938 "uuid": "a6007048-428f-11ef-a0af-c98d8ee52a94", 00:23:56.938 "is_configured": true, 00:23:56.938 "data_offset": 0, 00:23:56.938 "data_size": 65536 00:23:56.938 }, 00:23:56.938 { 00:23:56.938 "name": "BaseBdev3", 00:23:56.938 "uuid": "a708bc55-428f-11ef-a0af-c98d8ee52a94", 00:23:56.938 "is_configured": true, 00:23:56.938 "data_offset": 0, 00:23:56.938 "data_size": 65536 00:23:56.938 }, 00:23:56.938 { 00:23:56.938 "name": "BaseBdev4", 00:23:56.938 "uuid": "a7dd24dc-428f-11ef-a0af-c98d8ee52a94", 00:23:56.938 "is_configured": true, 00:23:56.938 "data_offset": 0, 00:23:56.938 "data_size": 65536 00:23:56.938 } 00:23:56.938 ] 00:23:56.938 } 00:23:56.938 } 00:23:56.938 }' 00:23:56.938 09:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:56.938 09:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:23:56.938 BaseBdev2 00:23:56.938 BaseBdev3 00:23:56.938 BaseBdev4' 00:23:56.938 09:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:56.939 09:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:23:56.939 09:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:57.198 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:57.198 "name": "BaseBdev1", 00:23:57.198 "aliases": [ 00:23:57.198 "a4579ac4-428f-11ef-a0af-c98d8ee52a94" 00:23:57.198 ], 00:23:57.198 "product_name": "Malloc disk", 00:23:57.198 "block_size": 512, 00:23:57.198 "num_blocks": 65536, 00:23:57.198 "uuid": "a4579ac4-428f-11ef-a0af-c98d8ee52a94", 00:23:57.198 "assigned_rate_limits": { 00:23:57.198 "rw_ios_per_sec": 0, 00:23:57.198 "rw_mbytes_per_sec": 0, 00:23:57.198 "r_mbytes_per_sec": 0, 00:23:57.198 "w_mbytes_per_sec": 0 00:23:57.198 }, 00:23:57.198 "claimed": true, 00:23:57.198 "claim_type": "exclusive_write", 00:23:57.198 "zoned": false, 00:23:57.198 "supported_io_types": { 00:23:57.198 "read": true, 00:23:57.198 "write": true, 00:23:57.198 "unmap": true, 00:23:57.198 "flush": true, 00:23:57.198 "reset": true, 00:23:57.198 "nvme_admin": false, 00:23:57.198 "nvme_io": false, 00:23:57.198 "nvme_io_md": false, 00:23:57.198 "write_zeroes": true, 00:23:57.198 "zcopy": true, 00:23:57.198 "get_zone_info": false, 00:23:57.198 "zone_management": false, 00:23:57.198 "zone_append": false, 00:23:57.198 "compare": false, 00:23:57.198 "compare_and_write": false, 00:23:57.198 "abort": true, 00:23:57.198 "seek_hole": false, 00:23:57.198 "seek_data": false, 00:23:57.198 "copy": true, 00:23:57.198 "nvme_iov_md": false 00:23:57.198 }, 00:23:57.198 "memory_domains": [ 00:23:57.198 { 00:23:57.198 "dma_device_id": "system", 00:23:57.198 "dma_device_type": 1 00:23:57.198 }, 00:23:57.198 { 00:23:57.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:57.198 "dma_device_type": 2 00:23:57.198 } 00:23:57.198 ], 00:23:57.198 "driver_specific": {} 00:23:57.198 }' 00:23:57.198 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:57.198 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:57.198 09:50:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:57.198 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:57.198 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:57.198 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:57.198 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:57.198 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:57.198 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:57.198 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:57.198 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:57.198 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:57.198 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:57.198 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:57.198 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:57.457 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:57.457 "name": "BaseBdev2", 00:23:57.457 "aliases": [ 00:23:57.457 "a6007048-428f-11ef-a0af-c98d8ee52a94" 00:23:57.457 ], 00:23:57.457 "product_name": "Malloc disk", 00:23:57.457 "block_size": 512, 00:23:57.457 "num_blocks": 65536, 00:23:57.457 "uuid": "a6007048-428f-11ef-a0af-c98d8ee52a94", 00:23:57.457 "assigned_rate_limits": { 00:23:57.457 "rw_ios_per_sec": 0, 00:23:57.457 "rw_mbytes_per_sec": 0, 00:23:57.457 "r_mbytes_per_sec": 0, 00:23:57.457 "w_mbytes_per_sec": 0 00:23:57.457 }, 00:23:57.457 "claimed": true, 00:23:57.457 "claim_type": "exclusive_write", 00:23:57.457 "zoned": false, 00:23:57.457 "supported_io_types": { 00:23:57.457 "read": true, 00:23:57.457 "write": true, 00:23:57.457 "unmap": true, 00:23:57.457 "flush": true, 00:23:57.457 "reset": true, 00:23:57.457 "nvme_admin": false, 00:23:57.457 "nvme_io": false, 00:23:57.457 "nvme_io_md": false, 00:23:57.457 "write_zeroes": true, 00:23:57.457 "zcopy": true, 00:23:57.457 "get_zone_info": false, 00:23:57.457 "zone_management": false, 00:23:57.457 "zone_append": false, 00:23:57.457 "compare": false, 00:23:57.457 "compare_and_write": false, 00:23:57.457 "abort": true, 00:23:57.457 "seek_hole": false, 00:23:57.457 "seek_data": false, 00:23:57.457 "copy": true, 00:23:57.458 "nvme_iov_md": false 00:23:57.458 }, 00:23:57.458 "memory_domains": [ 00:23:57.458 { 00:23:57.458 "dma_device_id": "system", 00:23:57.458 "dma_device_type": 1 00:23:57.458 }, 00:23:57.458 { 00:23:57.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:57.458 "dma_device_type": 2 00:23:57.458 } 00:23:57.458 ], 00:23:57.458 "driver_specific": {} 00:23:57.458 }' 00:23:57.458 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:57.458 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:57.458 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:57.458 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:57.458 
09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:57.458 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:57.458 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:57.458 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:57.458 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:57.458 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:57.458 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:57.458 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:57.458 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:57.458 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:57.458 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:57.716 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:57.716 "name": "BaseBdev3", 00:23:57.716 "aliases": [ 00:23:57.716 "a708bc55-428f-11ef-a0af-c98d8ee52a94" 00:23:57.716 ], 00:23:57.716 "product_name": "Malloc disk", 00:23:57.716 "block_size": 512, 00:23:57.716 "num_blocks": 65536, 00:23:57.716 "uuid": "a708bc55-428f-11ef-a0af-c98d8ee52a94", 00:23:57.716 "assigned_rate_limits": { 00:23:57.716 "rw_ios_per_sec": 0, 00:23:57.716 "rw_mbytes_per_sec": 0, 00:23:57.716 "r_mbytes_per_sec": 0, 00:23:57.716 "w_mbytes_per_sec": 0 00:23:57.716 }, 00:23:57.716 "claimed": true, 00:23:57.716 "claim_type": "exclusive_write", 00:23:57.716 "zoned": false, 00:23:57.716 "supported_io_types": { 00:23:57.716 "read": true, 00:23:57.716 "write": true, 00:23:57.716 "unmap": true, 00:23:57.717 "flush": true, 00:23:57.717 "reset": true, 00:23:57.717 "nvme_admin": false, 00:23:57.717 "nvme_io": false, 00:23:57.717 "nvme_io_md": false, 00:23:57.717 "write_zeroes": true, 00:23:57.717 "zcopy": true, 00:23:57.717 "get_zone_info": false, 00:23:57.717 "zone_management": false, 00:23:57.717 "zone_append": false, 00:23:57.717 "compare": false, 00:23:57.717 "compare_and_write": false, 00:23:57.717 "abort": true, 00:23:57.717 "seek_hole": false, 00:23:57.717 "seek_data": false, 00:23:57.717 "copy": true, 00:23:57.717 "nvme_iov_md": false 00:23:57.717 }, 00:23:57.717 "memory_domains": [ 00:23:57.717 { 00:23:57.717 "dma_device_id": "system", 00:23:57.717 "dma_device_type": 1 00:23:57.717 }, 00:23:57.717 { 00:23:57.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:57.717 "dma_device_type": 2 00:23:57.717 } 00:23:57.717 ], 00:23:57.717 "driver_specific": {} 00:23:57.717 }' 00:23:57.717 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:57.717 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:57.717 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:57.717 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:57.717 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:57.717 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 
00:23:57.717 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:57.717 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:57.717 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:57.717 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:57.717 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:57.976 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:57.976 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:57.976 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:23:57.976 09:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:57.976 09:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:57.976 "name": "BaseBdev4", 00:23:57.976 "aliases": [ 00:23:57.976 "a7dd24dc-428f-11ef-a0af-c98d8ee52a94" 00:23:57.976 ], 00:23:57.976 "product_name": "Malloc disk", 00:23:57.976 "block_size": 512, 00:23:57.976 "num_blocks": 65536, 00:23:57.976 "uuid": "a7dd24dc-428f-11ef-a0af-c98d8ee52a94", 00:23:57.976 "assigned_rate_limits": { 00:23:57.976 "rw_ios_per_sec": 0, 00:23:57.976 "rw_mbytes_per_sec": 0, 00:23:57.976 "r_mbytes_per_sec": 0, 00:23:57.976 "w_mbytes_per_sec": 0 00:23:57.976 }, 00:23:57.976 "claimed": true, 00:23:57.976 "claim_type": "exclusive_write", 00:23:57.976 "zoned": false, 00:23:57.977 "supported_io_types": { 00:23:57.977 "read": true, 00:23:57.977 "write": true, 00:23:57.977 "unmap": true, 00:23:57.977 "flush": true, 00:23:57.977 "reset": true, 00:23:57.977 "nvme_admin": false, 00:23:57.977 "nvme_io": false, 00:23:57.977 "nvme_io_md": false, 00:23:57.977 "write_zeroes": true, 00:23:57.977 "zcopy": true, 00:23:57.977 "get_zone_info": false, 00:23:57.977 "zone_management": false, 00:23:57.977 "zone_append": false, 00:23:57.977 "compare": false, 00:23:57.977 "compare_and_write": false, 00:23:57.977 "abort": true, 00:23:57.977 "seek_hole": false, 00:23:57.977 "seek_data": false, 00:23:57.977 "copy": true, 00:23:57.977 "nvme_iov_md": false 00:23:57.977 }, 00:23:57.977 "memory_domains": [ 00:23:57.977 { 00:23:57.977 "dma_device_id": "system", 00:23:57.977 "dma_device_type": 1 00:23:57.977 }, 00:23:57.977 { 00:23:57.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:57.977 "dma_device_type": 2 00:23:57.977 } 00:23:57.977 ], 00:23:57.977 "driver_specific": {} 00:23:57.977 }' 00:23:57.977 09:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:57.977 09:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:57.977 09:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:57.977 09:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:57.977 09:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:57.977 09:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:57.977 09:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:57.977 09:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:23:57.977 09:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:57.977 09:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:58.236 09:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:58.236 09:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:58.236 09:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:58.496 [2024-07-15 09:50:26.370075] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:58.496 [2024-07-15 09:50:26.370102] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:58.496 [2024-07-15 09:50:26.370115] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:58.496 09:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:23:58.496 09:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:23:58.496 09:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:58.496 09:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:23:58.496 09:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:23:58.496 09:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:23:58.496 09:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:58.496 09:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:23:58.496 09:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:58.496 09:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:58.496 09:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:58.496 09:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:58.496 09:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:58.496 09:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:58.496 09:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:58.496 09:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:58.496 09:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:58.755 09:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:58.755 "name": "Existed_Raid", 00:23:58.755 "uuid": "a7dd2c23-428f-11ef-a0af-c98d8ee52a94", 00:23:58.755 "strip_size_kb": 64, 00:23:58.755 "state": "offline", 00:23:58.755 "raid_level": "raid0", 00:23:58.755 "superblock": false, 00:23:58.755 "num_base_bdevs": 4, 00:23:58.755 "num_base_bdevs_discovered": 3, 00:23:58.755 "num_base_bdevs_operational": 3, 00:23:58.755 "base_bdevs_list": [ 00:23:58.755 { 00:23:58.755 "name": null, 00:23:58.755 "uuid": "00000000-0000-0000-0000-000000000000", 
00:23:58.755 "is_configured": false, 00:23:58.755 "data_offset": 0, 00:23:58.755 "data_size": 65536 00:23:58.755 }, 00:23:58.755 { 00:23:58.755 "name": "BaseBdev2", 00:23:58.755 "uuid": "a6007048-428f-11ef-a0af-c98d8ee52a94", 00:23:58.755 "is_configured": true, 00:23:58.755 "data_offset": 0, 00:23:58.755 "data_size": 65536 00:23:58.755 }, 00:23:58.755 { 00:23:58.755 "name": "BaseBdev3", 00:23:58.755 "uuid": "a708bc55-428f-11ef-a0af-c98d8ee52a94", 00:23:58.755 "is_configured": true, 00:23:58.755 "data_offset": 0, 00:23:58.755 "data_size": 65536 00:23:58.756 }, 00:23:58.756 { 00:23:58.756 "name": "BaseBdev4", 00:23:58.756 "uuid": "a7dd24dc-428f-11ef-a0af-c98d8ee52a94", 00:23:58.756 "is_configured": true, 00:23:58.756 "data_offset": 0, 00:23:58.756 "data_size": 65536 00:23:58.756 } 00:23:58.756 ] 00:23:58.756 }' 00:23:58.756 09:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:58.756 09:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.034 09:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:23:59.034 09:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:59.034 09:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:59.034 09:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:59.034 09:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:59.034 09:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:59.034 09:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:59.311 [2024-07-15 09:50:27.358570] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:59.311 09:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:59.311 09:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:59.311 09:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:59.311 09:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:59.570 09:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:59.570 09:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:59.570 09:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:59.828 [2024-07-15 09:50:27.835205] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:59.828 09:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:59.828 09:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:59.829 09:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:59.829 09:50:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:00.088 09:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:00.088 09:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:00.088 09:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:24:00.347 [2024-07-15 09:50:28.267726] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:00.347 [2024-07-15 09:50:28.267757] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x34688e634a00 name Existed_Raid, state offline 00:24:00.347 09:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:00.347 09:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:00.347 09:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:00.347 09:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:24:00.606 09:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:24:00.606 09:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:24:00.606 09:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:24:00.606 09:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:24:00.606 09:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:00.606 09:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:00.606 BaseBdev2 00:24:00.606 09:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:24:00.606 09:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:24:00.606 09:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:00.606 09:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:00.606 09:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:00.606 09:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:00.606 09:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:00.866 09:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:01.125 [ 00:24:01.125 { 00:24:01.125 "name": "BaseBdev2", 00:24:01.125 "aliases": [ 00:24:01.125 "aae3fe81-428f-11ef-a0af-c98d8ee52a94" 00:24:01.125 ], 00:24:01.125 "product_name": "Malloc disk", 00:24:01.125 "block_size": 512, 00:24:01.125 "num_blocks": 65536, 00:24:01.125 "uuid": "aae3fe81-428f-11ef-a0af-c98d8ee52a94", 00:24:01.125 "assigned_rate_limits": { 00:24:01.125 "rw_ios_per_sec": 0, 00:24:01.125 "rw_mbytes_per_sec": 0, 00:24:01.125 "r_mbytes_per_sec": 0, 00:24:01.125 "w_mbytes_per_sec": 0 
00:24:01.125 }, 00:24:01.125 "claimed": false, 00:24:01.125 "zoned": false, 00:24:01.125 "supported_io_types": { 00:24:01.125 "read": true, 00:24:01.125 "write": true, 00:24:01.125 "unmap": true, 00:24:01.125 "flush": true, 00:24:01.125 "reset": true, 00:24:01.125 "nvme_admin": false, 00:24:01.125 "nvme_io": false, 00:24:01.125 "nvme_io_md": false, 00:24:01.125 "write_zeroes": true, 00:24:01.125 "zcopy": true, 00:24:01.125 "get_zone_info": false, 00:24:01.125 "zone_management": false, 00:24:01.125 "zone_append": false, 00:24:01.125 "compare": false, 00:24:01.125 "compare_and_write": false, 00:24:01.125 "abort": true, 00:24:01.125 "seek_hole": false, 00:24:01.125 "seek_data": false, 00:24:01.125 "copy": true, 00:24:01.125 "nvme_iov_md": false 00:24:01.125 }, 00:24:01.125 "memory_domains": [ 00:24:01.125 { 00:24:01.125 "dma_device_id": "system", 00:24:01.125 "dma_device_type": 1 00:24:01.125 }, 00:24:01.125 { 00:24:01.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:01.125 "dma_device_type": 2 00:24:01.125 } 00:24:01.125 ], 00:24:01.125 "driver_specific": {} 00:24:01.125 } 00:24:01.125 ] 00:24:01.125 09:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:01.125 09:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:01.125 09:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:01.125 09:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:01.385 BaseBdev3 00:24:01.385 09:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:24:01.385 09:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:24:01.385 09:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:01.385 09:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:01.385 09:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:01.385 09:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:01.385 09:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:01.642 09:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:01.899 [ 00:24:01.899 { 00:24:01.899 "name": "BaseBdev3", 00:24:01.899 "aliases": [ 00:24:01.899 "ab40c66d-428f-11ef-a0af-c98d8ee52a94" 00:24:01.899 ], 00:24:01.899 "product_name": "Malloc disk", 00:24:01.899 "block_size": 512, 00:24:01.899 "num_blocks": 65536, 00:24:01.899 "uuid": "ab40c66d-428f-11ef-a0af-c98d8ee52a94", 00:24:01.899 "assigned_rate_limits": { 00:24:01.899 "rw_ios_per_sec": 0, 00:24:01.899 "rw_mbytes_per_sec": 0, 00:24:01.899 "r_mbytes_per_sec": 0, 00:24:01.899 "w_mbytes_per_sec": 0 00:24:01.899 }, 00:24:01.899 "claimed": false, 00:24:01.899 "zoned": false, 00:24:01.899 "supported_io_types": { 00:24:01.899 "read": true, 00:24:01.899 "write": true, 00:24:01.899 "unmap": true, 00:24:01.899 "flush": true, 00:24:01.899 "reset": true, 00:24:01.899 "nvme_admin": false, 00:24:01.899 "nvme_io": false, 00:24:01.899 "nvme_io_md": 
false, 00:24:01.899 "write_zeroes": true, 00:24:01.899 "zcopy": true, 00:24:01.899 "get_zone_info": false, 00:24:01.899 "zone_management": false, 00:24:01.899 "zone_append": false, 00:24:01.899 "compare": false, 00:24:01.899 "compare_and_write": false, 00:24:01.899 "abort": true, 00:24:01.899 "seek_hole": false, 00:24:01.899 "seek_data": false, 00:24:01.899 "copy": true, 00:24:01.899 "nvme_iov_md": false 00:24:01.899 }, 00:24:01.899 "memory_domains": [ 00:24:01.899 { 00:24:01.899 "dma_device_id": "system", 00:24:01.899 "dma_device_type": 1 00:24:01.899 }, 00:24:01.899 { 00:24:01.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:01.899 "dma_device_type": 2 00:24:01.899 } 00:24:01.899 ], 00:24:01.899 "driver_specific": {} 00:24:01.899 } 00:24:01.899 ] 00:24:01.899 09:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:01.899 09:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:01.899 09:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:01.899 09:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:02.194 BaseBdev4 00:24:02.194 09:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:24:02.194 09:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:24:02.194 09:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:02.194 09:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:02.194 09:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:02.194 09:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:02.194 09:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:02.454 09:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:02.712 [ 00:24:02.712 { 00:24:02.712 "name": "BaseBdev4", 00:24:02.712 "aliases": [ 00:24:02.712 "abc847f9-428f-11ef-a0af-c98d8ee52a94" 00:24:02.712 ], 00:24:02.712 "product_name": "Malloc disk", 00:24:02.712 "block_size": 512, 00:24:02.712 "num_blocks": 65536, 00:24:02.712 "uuid": "abc847f9-428f-11ef-a0af-c98d8ee52a94", 00:24:02.712 "assigned_rate_limits": { 00:24:02.712 "rw_ios_per_sec": 0, 00:24:02.712 "rw_mbytes_per_sec": 0, 00:24:02.712 "r_mbytes_per_sec": 0, 00:24:02.712 "w_mbytes_per_sec": 0 00:24:02.712 }, 00:24:02.712 "claimed": false, 00:24:02.712 "zoned": false, 00:24:02.712 "supported_io_types": { 00:24:02.713 "read": true, 00:24:02.713 "write": true, 00:24:02.713 "unmap": true, 00:24:02.713 "flush": true, 00:24:02.713 "reset": true, 00:24:02.713 "nvme_admin": false, 00:24:02.713 "nvme_io": false, 00:24:02.713 "nvme_io_md": false, 00:24:02.713 "write_zeroes": true, 00:24:02.713 "zcopy": true, 00:24:02.713 "get_zone_info": false, 00:24:02.713 "zone_management": false, 00:24:02.713 "zone_append": false, 00:24:02.713 "compare": false, 00:24:02.713 "compare_and_write": false, 00:24:02.713 "abort": true, 00:24:02.713 "seek_hole": false, 00:24:02.713 "seek_data": false, 
00:24:02.713 "copy": true, 00:24:02.713 "nvme_iov_md": false 00:24:02.713 }, 00:24:02.713 "memory_domains": [ 00:24:02.713 { 00:24:02.713 "dma_device_id": "system", 00:24:02.713 "dma_device_type": 1 00:24:02.713 }, 00:24:02.713 { 00:24:02.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:02.713 "dma_device_type": 2 00:24:02.713 } 00:24:02.713 ], 00:24:02.713 "driver_specific": {} 00:24:02.713 } 00:24:02.713 ] 00:24:02.713 09:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:02.713 09:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:02.713 09:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:02.713 09:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:02.713 [2024-07-15 09:50:30.752111] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:02.713 [2024-07-15 09:50:30.752190] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:02.713 [2024-07-15 09:50:30.752198] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:02.713 [2024-07-15 09:50:30.752842] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:02.713 [2024-07-15 09:50:30.752862] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:02.713 09:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:02.713 09:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:02.713 09:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:02.713 09:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:02.713 09:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:02.713 09:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:02.713 09:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:02.713 09:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:02.713 09:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:02.713 09:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:02.713 09:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:02.713 09:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:02.972 09:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:02.972 "name": "Existed_Raid", 00:24:02.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:02.972 "strip_size_kb": 64, 00:24:02.972 "state": "configuring", 00:24:02.972 "raid_level": "raid0", 00:24:02.972 "superblock": false, 00:24:02.972 "num_base_bdevs": 4, 00:24:02.972 "num_base_bdevs_discovered": 3, 00:24:02.972 "num_base_bdevs_operational": 
4, 00:24:02.972 "base_bdevs_list": [ 00:24:02.972 { 00:24:02.972 "name": "BaseBdev1", 00:24:02.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:02.972 "is_configured": false, 00:24:02.972 "data_offset": 0, 00:24:02.972 "data_size": 0 00:24:02.972 }, 00:24:02.972 { 00:24:02.972 "name": "BaseBdev2", 00:24:02.972 "uuid": "aae3fe81-428f-11ef-a0af-c98d8ee52a94", 00:24:02.972 "is_configured": true, 00:24:02.972 "data_offset": 0, 00:24:02.972 "data_size": 65536 00:24:02.972 }, 00:24:02.972 { 00:24:02.972 "name": "BaseBdev3", 00:24:02.972 "uuid": "ab40c66d-428f-11ef-a0af-c98d8ee52a94", 00:24:02.972 "is_configured": true, 00:24:02.972 "data_offset": 0, 00:24:02.972 "data_size": 65536 00:24:02.972 }, 00:24:02.972 { 00:24:02.972 "name": "BaseBdev4", 00:24:02.972 "uuid": "abc847f9-428f-11ef-a0af-c98d8ee52a94", 00:24:02.972 "is_configured": true, 00:24:02.972 "data_offset": 0, 00:24:02.972 "data_size": 65536 00:24:02.972 } 00:24:02.972 ] 00:24:02.972 }' 00:24:02.972 09:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:02.972 09:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:03.230 09:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:24:03.489 [2024-07-15 09:50:31.488161] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:03.489 09:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:03.489 09:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:03.489 09:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:03.489 09:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:03.489 09:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:03.489 09:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:03.489 09:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:03.489 09:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:03.489 09:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:03.489 09:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:03.489 09:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:03.489 09:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:03.748 09:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:03.748 "name": "Existed_Raid", 00:24:03.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:03.748 "strip_size_kb": 64, 00:24:03.748 "state": "configuring", 00:24:03.748 "raid_level": "raid0", 00:24:03.748 "superblock": false, 00:24:03.748 "num_base_bdevs": 4, 00:24:03.748 "num_base_bdevs_discovered": 2, 00:24:03.748 "num_base_bdevs_operational": 4, 00:24:03.748 "base_bdevs_list": [ 00:24:03.748 { 00:24:03.748 "name": "BaseBdev1", 00:24:03.748 "uuid": "00000000-0000-0000-0000-000000000000", 
00:24:03.748 "is_configured": false, 00:24:03.748 "data_offset": 0, 00:24:03.748 "data_size": 0 00:24:03.748 }, 00:24:03.748 { 00:24:03.748 "name": null, 00:24:03.748 "uuid": "aae3fe81-428f-11ef-a0af-c98d8ee52a94", 00:24:03.748 "is_configured": false, 00:24:03.748 "data_offset": 0, 00:24:03.748 "data_size": 65536 00:24:03.748 }, 00:24:03.748 { 00:24:03.748 "name": "BaseBdev3", 00:24:03.748 "uuid": "ab40c66d-428f-11ef-a0af-c98d8ee52a94", 00:24:03.748 "is_configured": true, 00:24:03.748 "data_offset": 0, 00:24:03.748 "data_size": 65536 00:24:03.748 }, 00:24:03.748 { 00:24:03.748 "name": "BaseBdev4", 00:24:03.748 "uuid": "abc847f9-428f-11ef-a0af-c98d8ee52a94", 00:24:03.748 "is_configured": true, 00:24:03.748 "data_offset": 0, 00:24:03.748 "data_size": 65536 00:24:03.748 } 00:24:03.748 ] 00:24:03.748 }' 00:24:03.748 09:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:03.748 09:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:04.007 09:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.007 09:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:04.265 09:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:24:04.265 09:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:04.524 [2024-07-15 09:50:32.544335] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:04.524 BaseBdev1 00:24:04.524 09:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:24:04.524 09:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:24:04.524 09:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:04.524 09:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:04.524 09:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:04.524 09:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:04.524 09:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:04.804 09:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:05.063 [ 00:24:05.063 { 00:24:05.063 "name": "BaseBdev1", 00:24:05.063 "aliases": [ 00:24:05.063 "ad34ae33-428f-11ef-a0af-c98d8ee52a94" 00:24:05.063 ], 00:24:05.063 "product_name": "Malloc disk", 00:24:05.063 "block_size": 512, 00:24:05.063 "num_blocks": 65536, 00:24:05.063 "uuid": "ad34ae33-428f-11ef-a0af-c98d8ee52a94", 00:24:05.063 "assigned_rate_limits": { 00:24:05.063 "rw_ios_per_sec": 0, 00:24:05.063 "rw_mbytes_per_sec": 0, 00:24:05.063 "r_mbytes_per_sec": 0, 00:24:05.063 "w_mbytes_per_sec": 0 00:24:05.063 }, 00:24:05.063 "claimed": true, 00:24:05.063 "claim_type": "exclusive_write", 00:24:05.063 "zoned": false, 00:24:05.063 "supported_io_types": { 00:24:05.063 "read": true, 00:24:05.063 
"write": true, 00:24:05.063 "unmap": true, 00:24:05.063 "flush": true, 00:24:05.063 "reset": true, 00:24:05.063 "nvme_admin": false, 00:24:05.063 "nvme_io": false, 00:24:05.063 "nvme_io_md": false, 00:24:05.063 "write_zeroes": true, 00:24:05.063 "zcopy": true, 00:24:05.063 "get_zone_info": false, 00:24:05.063 "zone_management": false, 00:24:05.063 "zone_append": false, 00:24:05.063 "compare": false, 00:24:05.063 "compare_and_write": false, 00:24:05.063 "abort": true, 00:24:05.063 "seek_hole": false, 00:24:05.063 "seek_data": false, 00:24:05.063 "copy": true, 00:24:05.063 "nvme_iov_md": false 00:24:05.063 }, 00:24:05.063 "memory_domains": [ 00:24:05.063 { 00:24:05.063 "dma_device_id": "system", 00:24:05.063 "dma_device_type": 1 00:24:05.063 }, 00:24:05.063 { 00:24:05.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:05.063 "dma_device_type": 2 00:24:05.063 } 00:24:05.063 ], 00:24:05.063 "driver_specific": {} 00:24:05.063 } 00:24:05.063 ] 00:24:05.063 09:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:05.063 09:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:05.063 09:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:05.063 09:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:05.063 09:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:05.063 09:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:05.063 09:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:05.063 09:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:05.063 09:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:05.063 09:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:05.063 09:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:05.063 09:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:05.063 09:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.328 09:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:05.328 "name": "Existed_Raid", 00:24:05.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:05.328 "strip_size_kb": 64, 00:24:05.328 "state": "configuring", 00:24:05.328 "raid_level": "raid0", 00:24:05.328 "superblock": false, 00:24:05.328 "num_base_bdevs": 4, 00:24:05.328 "num_base_bdevs_discovered": 3, 00:24:05.328 "num_base_bdevs_operational": 4, 00:24:05.328 "base_bdevs_list": [ 00:24:05.328 { 00:24:05.328 "name": "BaseBdev1", 00:24:05.328 "uuid": "ad34ae33-428f-11ef-a0af-c98d8ee52a94", 00:24:05.328 "is_configured": true, 00:24:05.328 "data_offset": 0, 00:24:05.328 "data_size": 65536 00:24:05.328 }, 00:24:05.328 { 00:24:05.328 "name": null, 00:24:05.328 "uuid": "aae3fe81-428f-11ef-a0af-c98d8ee52a94", 00:24:05.328 "is_configured": false, 00:24:05.328 "data_offset": 0, 00:24:05.328 "data_size": 65536 00:24:05.328 }, 00:24:05.328 { 00:24:05.328 "name": "BaseBdev3", 00:24:05.328 "uuid": 
"ab40c66d-428f-11ef-a0af-c98d8ee52a94", 00:24:05.328 "is_configured": true, 00:24:05.328 "data_offset": 0, 00:24:05.328 "data_size": 65536 00:24:05.328 }, 00:24:05.328 { 00:24:05.328 "name": "BaseBdev4", 00:24:05.328 "uuid": "abc847f9-428f-11ef-a0af-c98d8ee52a94", 00:24:05.328 "is_configured": true, 00:24:05.328 "data_offset": 0, 00:24:05.328 "data_size": 65536 00:24:05.328 } 00:24:05.328 ] 00:24:05.328 }' 00:24:05.328 09:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:05.328 09:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:05.587 09:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.587 09:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:05.846 09:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:24:05.846 09:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:24:05.846 [2024-07-15 09:50:33.924289] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:05.846 09:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:05.846 09:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:05.846 09:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:05.846 09:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:05.846 09:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:05.846 09:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:05.846 09:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:05.846 09:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:05.846 09:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:05.846 09:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:05.846 09:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:05.846 09:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:06.105 09:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:06.105 "name": "Existed_Raid", 00:24:06.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:06.105 "strip_size_kb": 64, 00:24:06.105 "state": "configuring", 00:24:06.105 "raid_level": "raid0", 00:24:06.105 "superblock": false, 00:24:06.105 "num_base_bdevs": 4, 00:24:06.105 "num_base_bdevs_discovered": 2, 00:24:06.105 "num_base_bdevs_operational": 4, 00:24:06.105 "base_bdevs_list": [ 00:24:06.105 { 00:24:06.105 "name": "BaseBdev1", 00:24:06.105 "uuid": "ad34ae33-428f-11ef-a0af-c98d8ee52a94", 00:24:06.105 "is_configured": true, 00:24:06.105 "data_offset": 0, 00:24:06.105 "data_size": 65536 00:24:06.105 }, 00:24:06.105 { 
00:24:06.105 "name": null, 00:24:06.105 "uuid": "aae3fe81-428f-11ef-a0af-c98d8ee52a94", 00:24:06.105 "is_configured": false, 00:24:06.105 "data_offset": 0, 00:24:06.105 "data_size": 65536 00:24:06.105 }, 00:24:06.105 { 00:24:06.105 "name": null, 00:24:06.105 "uuid": "ab40c66d-428f-11ef-a0af-c98d8ee52a94", 00:24:06.105 "is_configured": false, 00:24:06.105 "data_offset": 0, 00:24:06.105 "data_size": 65536 00:24:06.105 }, 00:24:06.105 { 00:24:06.105 "name": "BaseBdev4", 00:24:06.105 "uuid": "abc847f9-428f-11ef-a0af-c98d8ee52a94", 00:24:06.105 "is_configured": true, 00:24:06.105 "data_offset": 0, 00:24:06.105 "data_size": 65536 00:24:06.105 } 00:24:06.105 ] 00:24:06.105 }' 00:24:06.105 09:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:06.105 09:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:06.364 09:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:06.364 09:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:06.622 09:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:24:06.622 09:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:24:06.881 [2024-07-15 09:50:34.804374] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:06.881 09:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:06.881 09:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:06.881 09:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:06.881 09:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:06.881 09:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:06.881 09:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:06.881 09:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:06.881 09:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:06.881 09:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:06.881 09:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:06.881 09:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:06.881 09:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:07.139 09:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:07.139 "name": "Existed_Raid", 00:24:07.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.139 "strip_size_kb": 64, 00:24:07.139 "state": "configuring", 00:24:07.139 "raid_level": "raid0", 00:24:07.139 "superblock": false, 00:24:07.139 "num_base_bdevs": 4, 00:24:07.139 "num_base_bdevs_discovered": 3, 00:24:07.139 
"num_base_bdevs_operational": 4, 00:24:07.139 "base_bdevs_list": [ 00:24:07.139 { 00:24:07.139 "name": "BaseBdev1", 00:24:07.139 "uuid": "ad34ae33-428f-11ef-a0af-c98d8ee52a94", 00:24:07.139 "is_configured": true, 00:24:07.139 "data_offset": 0, 00:24:07.139 "data_size": 65536 00:24:07.139 }, 00:24:07.139 { 00:24:07.139 "name": null, 00:24:07.139 "uuid": "aae3fe81-428f-11ef-a0af-c98d8ee52a94", 00:24:07.139 "is_configured": false, 00:24:07.139 "data_offset": 0, 00:24:07.139 "data_size": 65536 00:24:07.139 }, 00:24:07.139 { 00:24:07.139 "name": "BaseBdev3", 00:24:07.139 "uuid": "ab40c66d-428f-11ef-a0af-c98d8ee52a94", 00:24:07.139 "is_configured": true, 00:24:07.139 "data_offset": 0, 00:24:07.139 "data_size": 65536 00:24:07.139 }, 00:24:07.139 { 00:24:07.139 "name": "BaseBdev4", 00:24:07.139 "uuid": "abc847f9-428f-11ef-a0af-c98d8ee52a94", 00:24:07.139 "is_configured": true, 00:24:07.139 "data_offset": 0, 00:24:07.139 "data_size": 65536 00:24:07.139 } 00:24:07.139 ] 00:24:07.139 }' 00:24:07.139 09:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:07.139 09:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:07.399 09:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:07.399 09:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:07.657 09:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:24:07.658 09:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:07.658 [2024-07-15 09:50:35.740439] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:07.916 09:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:07.916 09:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:07.916 09:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:07.916 09:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:07.916 09:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:07.916 09:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:07.916 09:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:07.916 09:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:07.916 09:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:07.916 09:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:07.916 09:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:07.916 09:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:07.916 09:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:07.916 "name": "Existed_Raid", 00:24:07.916 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:24:07.916 "strip_size_kb": 64, 00:24:07.916 "state": "configuring", 00:24:07.916 "raid_level": "raid0", 00:24:07.916 "superblock": false, 00:24:07.916 "num_base_bdevs": 4, 00:24:07.916 "num_base_bdevs_discovered": 2, 00:24:07.916 "num_base_bdevs_operational": 4, 00:24:07.916 "base_bdevs_list": [ 00:24:07.916 { 00:24:07.916 "name": null, 00:24:07.916 "uuid": "ad34ae33-428f-11ef-a0af-c98d8ee52a94", 00:24:07.916 "is_configured": false, 00:24:07.916 "data_offset": 0, 00:24:07.916 "data_size": 65536 00:24:07.916 }, 00:24:07.916 { 00:24:07.916 "name": null, 00:24:07.916 "uuid": "aae3fe81-428f-11ef-a0af-c98d8ee52a94", 00:24:07.916 "is_configured": false, 00:24:07.916 "data_offset": 0, 00:24:07.916 "data_size": 65536 00:24:07.916 }, 00:24:07.916 { 00:24:07.916 "name": "BaseBdev3", 00:24:07.916 "uuid": "ab40c66d-428f-11ef-a0af-c98d8ee52a94", 00:24:07.916 "is_configured": true, 00:24:07.916 "data_offset": 0, 00:24:07.916 "data_size": 65536 00:24:07.916 }, 00:24:07.916 { 00:24:07.916 "name": "BaseBdev4", 00:24:07.916 "uuid": "abc847f9-428f-11ef-a0af-c98d8ee52a94", 00:24:07.916 "is_configured": true, 00:24:07.916 "data_offset": 0, 00:24:07.916 "data_size": 65536 00:24:07.916 } 00:24:07.916 ] 00:24:07.916 }' 00:24:07.916 09:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:07.916 09:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:08.495 09:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.495 09:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:08.495 09:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:24:08.784 09:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:24:08.784 [2024-07-15 09:50:36.792678] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:08.784 09:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:08.784 09:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:08.784 09:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:08.784 09:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:08.784 09:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:08.784 09:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:08.784 09:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:08.784 09:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:08.784 09:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:08.784 09:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:08.784 09:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:08.784 09:50:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:09.044 09:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:09.044 "name": "Existed_Raid", 00:24:09.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:09.044 "strip_size_kb": 64, 00:24:09.044 "state": "configuring", 00:24:09.044 "raid_level": "raid0", 00:24:09.044 "superblock": false, 00:24:09.044 "num_base_bdevs": 4, 00:24:09.044 "num_base_bdevs_discovered": 3, 00:24:09.044 "num_base_bdevs_operational": 4, 00:24:09.044 "base_bdevs_list": [ 00:24:09.044 { 00:24:09.044 "name": null, 00:24:09.044 "uuid": "ad34ae33-428f-11ef-a0af-c98d8ee52a94", 00:24:09.044 "is_configured": false, 00:24:09.044 "data_offset": 0, 00:24:09.044 "data_size": 65536 00:24:09.044 }, 00:24:09.044 { 00:24:09.044 "name": "BaseBdev2", 00:24:09.044 "uuid": "aae3fe81-428f-11ef-a0af-c98d8ee52a94", 00:24:09.044 "is_configured": true, 00:24:09.044 "data_offset": 0, 00:24:09.044 "data_size": 65536 00:24:09.044 }, 00:24:09.044 { 00:24:09.044 "name": "BaseBdev3", 00:24:09.044 "uuid": "ab40c66d-428f-11ef-a0af-c98d8ee52a94", 00:24:09.044 "is_configured": true, 00:24:09.044 "data_offset": 0, 00:24:09.044 "data_size": 65536 00:24:09.044 }, 00:24:09.044 { 00:24:09.044 "name": "BaseBdev4", 00:24:09.044 "uuid": "abc847f9-428f-11ef-a0af-c98d8ee52a94", 00:24:09.044 "is_configured": true, 00:24:09.044 "data_offset": 0, 00:24:09.044 "data_size": 65536 00:24:09.044 } 00:24:09.044 ] 00:24:09.044 }' 00:24:09.044 09:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:09.044 09:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:09.303 09:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:09.303 09:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:09.563 09:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:24:09.563 09:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:09.563 09:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:24:09.823 09:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u ad34ae33-428f-11ef-a0af-c98d8ee52a94 00:24:10.082 [2024-07-15 09:50:37.964904] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:24:10.082 [2024-07-15 09:50:37.964935] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x34688e634f00 00:24:10.082 [2024-07-15 09:50:37.964939] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:24:10.082 [2024-07-15 09:50:37.964960] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x34688e697e20 00:24:10.082 [2024-07-15 09:50:37.965031] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x34688e634f00 00:24:10.082 [2024-07-15 09:50:37.965034] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x34688e634f00 
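
The DEBUG burst above is the raid0 volume being reassembled from a freshly created base device: the test reads back the UUID recorded for the unconfigured slot, recreates a malloc bdev named NewBaseBdev with that same UUID so the raid layer can claim it, and the volume transitions to online (blockcnt 262144 = 4 base bdevs of 65536 blocks each at blocklen 512, superblock disabled). A minimal sketch of the equivalent RPC sequence, using only commands, arguments and jq selectors visible in the trace (the rpc_py shorthand is ours; the test scripts spell out the full path on every call):

    # shorthand for the full rpc.py invocation seen throughout the trace
    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # recover the UUID kept for the missing base slot (bdev_raid.sh@333)
    uuid=$($rpc_py bdev_raid_get_bdevs all | jq -r '.[0].base_bdevs_list[0].uuid')
    # recreate the backing device: 32 MiB, 512-byte blocks, original UUID
    $rpc_py bdev_malloc_create 32 512 -b NewBaseBdev -u "$uuid"
    # block until the bdev layer has examined it and it is queryable
    $rpc_py bdev_wait_for_examine
    $rpc_py bdev_get_bdevs -b NewBaseBdev -t 2000
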
00:24:10.082 [2024-07-15 09:50:37.965064] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:10.082 NewBaseBdev 00:24:10.082 09:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:24:10.082 09:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:24:10.082 09:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:10.082 09:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:10.082 09:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:10.082 09:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:10.082 09:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:10.341 09:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:24:10.341 [ 00:24:10.341 { 00:24:10.341 "name": "NewBaseBdev", 00:24:10.341 "aliases": [ 00:24:10.341 "ad34ae33-428f-11ef-a0af-c98d8ee52a94" 00:24:10.341 ], 00:24:10.341 "product_name": "Malloc disk", 00:24:10.341 "block_size": 512, 00:24:10.341 "num_blocks": 65536, 00:24:10.341 "uuid": "ad34ae33-428f-11ef-a0af-c98d8ee52a94", 00:24:10.341 "assigned_rate_limits": { 00:24:10.341 "rw_ios_per_sec": 0, 00:24:10.341 "rw_mbytes_per_sec": 0, 00:24:10.341 "r_mbytes_per_sec": 0, 00:24:10.341 "w_mbytes_per_sec": 0 00:24:10.341 }, 00:24:10.341 "claimed": true, 00:24:10.341 "claim_type": "exclusive_write", 00:24:10.341 "zoned": false, 00:24:10.341 "supported_io_types": { 00:24:10.341 "read": true, 00:24:10.341 "write": true, 00:24:10.341 "unmap": true, 00:24:10.341 "flush": true, 00:24:10.341 "reset": true, 00:24:10.341 "nvme_admin": false, 00:24:10.341 "nvme_io": false, 00:24:10.341 "nvme_io_md": false, 00:24:10.341 "write_zeroes": true, 00:24:10.341 "zcopy": true, 00:24:10.341 "get_zone_info": false, 00:24:10.341 "zone_management": false, 00:24:10.341 "zone_append": false, 00:24:10.341 "compare": false, 00:24:10.341 "compare_and_write": false, 00:24:10.341 "abort": true, 00:24:10.341 "seek_hole": false, 00:24:10.341 "seek_data": false, 00:24:10.341 "copy": true, 00:24:10.341 "nvme_iov_md": false 00:24:10.341 }, 00:24:10.341 "memory_domains": [ 00:24:10.341 { 00:24:10.342 "dma_device_id": "system", 00:24:10.342 "dma_device_type": 1 00:24:10.342 }, 00:24:10.342 { 00:24:10.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:10.342 "dma_device_type": 2 00:24:10.342 } 00:24:10.342 ], 00:24:10.342 "driver_specific": {} 00:24:10.342 } 00:24:10.342 ] 00:24:10.342 09:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:10.342 09:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:24:10.342 09:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:10.342 09:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:10.342 09:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:10.342 09:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:24:10.342 09:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:10.342 09:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:10.342 09:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:10.342 09:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:10.342 09:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:10.342 09:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:10.342 09:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:10.600 09:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:10.600 "name": "Existed_Raid", 00:24:10.600 "uuid": "b06fd1ab-428f-11ef-a0af-c98d8ee52a94", 00:24:10.600 "strip_size_kb": 64, 00:24:10.600 "state": "online", 00:24:10.600 "raid_level": "raid0", 00:24:10.600 "superblock": false, 00:24:10.600 "num_base_bdevs": 4, 00:24:10.600 "num_base_bdevs_discovered": 4, 00:24:10.600 "num_base_bdevs_operational": 4, 00:24:10.600 "base_bdevs_list": [ 00:24:10.600 { 00:24:10.600 "name": "NewBaseBdev", 00:24:10.600 "uuid": "ad34ae33-428f-11ef-a0af-c98d8ee52a94", 00:24:10.600 "is_configured": true, 00:24:10.600 "data_offset": 0, 00:24:10.600 "data_size": 65536 00:24:10.600 }, 00:24:10.600 { 00:24:10.600 "name": "BaseBdev2", 00:24:10.600 "uuid": "aae3fe81-428f-11ef-a0af-c98d8ee52a94", 00:24:10.600 "is_configured": true, 00:24:10.600 "data_offset": 0, 00:24:10.600 "data_size": 65536 00:24:10.600 }, 00:24:10.600 { 00:24:10.600 "name": "BaseBdev3", 00:24:10.600 "uuid": "ab40c66d-428f-11ef-a0af-c98d8ee52a94", 00:24:10.600 "is_configured": true, 00:24:10.600 "data_offset": 0, 00:24:10.600 "data_size": 65536 00:24:10.600 }, 00:24:10.600 { 00:24:10.600 "name": "BaseBdev4", 00:24:10.600 "uuid": "abc847f9-428f-11ef-a0af-c98d8ee52a94", 00:24:10.600 "is_configured": true, 00:24:10.600 "data_offset": 0, 00:24:10.600 "data_size": 65536 00:24:10.600 } 00:24:10.600 ] 00:24:10.600 }' 00:24:10.600 09:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:10.600 09:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:10.858 09:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:24:10.858 09:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:24:10.858 09:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:10.858 09:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:10.858 09:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:10.858 09:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:24:10.858 09:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:10.858 09:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:11.116 [2024-07-15 09:50:39.080820] 
bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:11.116 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:11.116 "name": "Existed_Raid", 00:24:11.116 "aliases": [ 00:24:11.116 "b06fd1ab-428f-11ef-a0af-c98d8ee52a94" 00:24:11.116 ], 00:24:11.116 "product_name": "Raid Volume", 00:24:11.116 "block_size": 512, 00:24:11.116 "num_blocks": 262144, 00:24:11.116 "uuid": "b06fd1ab-428f-11ef-a0af-c98d8ee52a94", 00:24:11.116 "assigned_rate_limits": { 00:24:11.116 "rw_ios_per_sec": 0, 00:24:11.116 "rw_mbytes_per_sec": 0, 00:24:11.116 "r_mbytes_per_sec": 0, 00:24:11.116 "w_mbytes_per_sec": 0 00:24:11.116 }, 00:24:11.116 "claimed": false, 00:24:11.116 "zoned": false, 00:24:11.116 "supported_io_types": { 00:24:11.116 "read": true, 00:24:11.116 "write": true, 00:24:11.116 "unmap": true, 00:24:11.116 "flush": true, 00:24:11.116 "reset": true, 00:24:11.116 "nvme_admin": false, 00:24:11.116 "nvme_io": false, 00:24:11.116 "nvme_io_md": false, 00:24:11.116 "write_zeroes": true, 00:24:11.116 "zcopy": false, 00:24:11.116 "get_zone_info": false, 00:24:11.116 "zone_management": false, 00:24:11.116 "zone_append": false, 00:24:11.116 "compare": false, 00:24:11.116 "compare_and_write": false, 00:24:11.116 "abort": false, 00:24:11.116 "seek_hole": false, 00:24:11.116 "seek_data": false, 00:24:11.116 "copy": false, 00:24:11.116 "nvme_iov_md": false 00:24:11.116 }, 00:24:11.116 "memory_domains": [ 00:24:11.116 { 00:24:11.116 "dma_device_id": "system", 00:24:11.116 "dma_device_type": 1 00:24:11.116 }, 00:24:11.116 { 00:24:11.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:11.116 "dma_device_type": 2 00:24:11.116 }, 00:24:11.116 { 00:24:11.116 "dma_device_id": "system", 00:24:11.116 "dma_device_type": 1 00:24:11.116 }, 00:24:11.116 { 00:24:11.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:11.116 "dma_device_type": 2 00:24:11.116 }, 00:24:11.116 { 00:24:11.116 "dma_device_id": "system", 00:24:11.116 "dma_device_type": 1 00:24:11.116 }, 00:24:11.116 { 00:24:11.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:11.116 "dma_device_type": 2 00:24:11.116 }, 00:24:11.116 { 00:24:11.116 "dma_device_id": "system", 00:24:11.116 "dma_device_type": 1 00:24:11.116 }, 00:24:11.116 { 00:24:11.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:11.116 "dma_device_type": 2 00:24:11.116 } 00:24:11.116 ], 00:24:11.116 "driver_specific": { 00:24:11.116 "raid": { 00:24:11.116 "uuid": "b06fd1ab-428f-11ef-a0af-c98d8ee52a94", 00:24:11.116 "strip_size_kb": 64, 00:24:11.116 "state": "online", 00:24:11.116 "raid_level": "raid0", 00:24:11.116 "superblock": false, 00:24:11.116 "num_base_bdevs": 4, 00:24:11.116 "num_base_bdevs_discovered": 4, 00:24:11.116 "num_base_bdevs_operational": 4, 00:24:11.116 "base_bdevs_list": [ 00:24:11.116 { 00:24:11.116 "name": "NewBaseBdev", 00:24:11.116 "uuid": "ad34ae33-428f-11ef-a0af-c98d8ee52a94", 00:24:11.116 "is_configured": true, 00:24:11.116 "data_offset": 0, 00:24:11.116 "data_size": 65536 00:24:11.116 }, 00:24:11.116 { 00:24:11.116 "name": "BaseBdev2", 00:24:11.116 "uuid": "aae3fe81-428f-11ef-a0af-c98d8ee52a94", 00:24:11.116 "is_configured": true, 00:24:11.116 "data_offset": 0, 00:24:11.116 "data_size": 65536 00:24:11.116 }, 00:24:11.116 { 00:24:11.116 "name": "BaseBdev3", 00:24:11.116 "uuid": "ab40c66d-428f-11ef-a0af-c98d8ee52a94", 00:24:11.116 "is_configured": true, 00:24:11.116 "data_offset": 0, 00:24:11.116 "data_size": 65536 00:24:11.116 }, 00:24:11.116 { 00:24:11.116 "name": "BaseBdev4", 00:24:11.116 "uuid": 
"abc847f9-428f-11ef-a0af-c98d8ee52a94", 00:24:11.116 "is_configured": true, 00:24:11.116 "data_offset": 0, 00:24:11.116 "data_size": 65536 00:24:11.116 } 00:24:11.116 ] 00:24:11.116 } 00:24:11.116 } 00:24:11.116 }' 00:24:11.116 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:11.116 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:24:11.116 BaseBdev2 00:24:11.116 BaseBdev3 00:24:11.116 BaseBdev4' 00:24:11.116 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:11.116 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:24:11.116 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:11.375 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:11.375 "name": "NewBaseBdev", 00:24:11.375 "aliases": [ 00:24:11.375 "ad34ae33-428f-11ef-a0af-c98d8ee52a94" 00:24:11.375 ], 00:24:11.375 "product_name": "Malloc disk", 00:24:11.375 "block_size": 512, 00:24:11.375 "num_blocks": 65536, 00:24:11.375 "uuid": "ad34ae33-428f-11ef-a0af-c98d8ee52a94", 00:24:11.375 "assigned_rate_limits": { 00:24:11.375 "rw_ios_per_sec": 0, 00:24:11.375 "rw_mbytes_per_sec": 0, 00:24:11.375 "r_mbytes_per_sec": 0, 00:24:11.375 "w_mbytes_per_sec": 0 00:24:11.375 }, 00:24:11.375 "claimed": true, 00:24:11.375 "claim_type": "exclusive_write", 00:24:11.375 "zoned": false, 00:24:11.375 "supported_io_types": { 00:24:11.375 "read": true, 00:24:11.375 "write": true, 00:24:11.375 "unmap": true, 00:24:11.375 "flush": true, 00:24:11.375 "reset": true, 00:24:11.375 "nvme_admin": false, 00:24:11.375 "nvme_io": false, 00:24:11.375 "nvme_io_md": false, 00:24:11.375 "write_zeroes": true, 00:24:11.375 "zcopy": true, 00:24:11.375 "get_zone_info": false, 00:24:11.376 "zone_management": false, 00:24:11.376 "zone_append": false, 00:24:11.376 "compare": false, 00:24:11.376 "compare_and_write": false, 00:24:11.376 "abort": true, 00:24:11.376 "seek_hole": false, 00:24:11.376 "seek_data": false, 00:24:11.376 "copy": true, 00:24:11.376 "nvme_iov_md": false 00:24:11.376 }, 00:24:11.376 "memory_domains": [ 00:24:11.376 { 00:24:11.376 "dma_device_id": "system", 00:24:11.376 "dma_device_type": 1 00:24:11.376 }, 00:24:11.376 { 00:24:11.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:11.376 "dma_device_type": 2 00:24:11.376 } 00:24:11.376 ], 00:24:11.376 "driver_specific": {} 00:24:11.376 }' 00:24:11.376 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:11.376 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:11.376 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:11.376 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:11.376 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:11.376 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:11.376 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:11.376 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:11.376 09:50:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:11.376 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:11.376 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:11.376 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:11.376 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:11.376 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:11.376 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:11.634 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:11.634 "name": "BaseBdev2", 00:24:11.634 "aliases": [ 00:24:11.634 "aae3fe81-428f-11ef-a0af-c98d8ee52a94" 00:24:11.634 ], 00:24:11.634 "product_name": "Malloc disk", 00:24:11.634 "block_size": 512, 00:24:11.634 "num_blocks": 65536, 00:24:11.634 "uuid": "aae3fe81-428f-11ef-a0af-c98d8ee52a94", 00:24:11.634 "assigned_rate_limits": { 00:24:11.634 "rw_ios_per_sec": 0, 00:24:11.634 "rw_mbytes_per_sec": 0, 00:24:11.634 "r_mbytes_per_sec": 0, 00:24:11.634 "w_mbytes_per_sec": 0 00:24:11.634 }, 00:24:11.634 "claimed": true, 00:24:11.634 "claim_type": "exclusive_write", 00:24:11.634 "zoned": false, 00:24:11.634 "supported_io_types": { 00:24:11.634 "read": true, 00:24:11.634 "write": true, 00:24:11.634 "unmap": true, 00:24:11.634 "flush": true, 00:24:11.634 "reset": true, 00:24:11.634 "nvme_admin": false, 00:24:11.634 "nvme_io": false, 00:24:11.634 "nvme_io_md": false, 00:24:11.634 "write_zeroes": true, 00:24:11.634 "zcopy": true, 00:24:11.634 "get_zone_info": false, 00:24:11.634 "zone_management": false, 00:24:11.634 "zone_append": false, 00:24:11.634 "compare": false, 00:24:11.634 "compare_and_write": false, 00:24:11.634 "abort": true, 00:24:11.634 "seek_hole": false, 00:24:11.634 "seek_data": false, 00:24:11.634 "copy": true, 00:24:11.634 "nvme_iov_md": false 00:24:11.634 }, 00:24:11.634 "memory_domains": [ 00:24:11.634 { 00:24:11.634 "dma_device_id": "system", 00:24:11.634 "dma_device_type": 1 00:24:11.634 }, 00:24:11.634 { 00:24:11.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:11.634 "dma_device_type": 2 00:24:11.634 } 00:24:11.634 ], 00:24:11.634 "driver_specific": {} 00:24:11.634 }' 00:24:11.634 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:11.634 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:11.634 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:11.634 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:11.634 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:11.634 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:11.634 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:11.634 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:11.634 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:11.634 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:11.634 
09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:11.634 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:11.634 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:11.634 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:11.634 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:11.907 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:11.907 "name": "BaseBdev3", 00:24:11.907 "aliases": [ 00:24:11.907 "ab40c66d-428f-11ef-a0af-c98d8ee52a94" 00:24:11.907 ], 00:24:11.907 "product_name": "Malloc disk", 00:24:11.907 "block_size": 512, 00:24:11.907 "num_blocks": 65536, 00:24:11.907 "uuid": "ab40c66d-428f-11ef-a0af-c98d8ee52a94", 00:24:11.907 "assigned_rate_limits": { 00:24:11.907 "rw_ios_per_sec": 0, 00:24:11.907 "rw_mbytes_per_sec": 0, 00:24:11.907 "r_mbytes_per_sec": 0, 00:24:11.907 "w_mbytes_per_sec": 0 00:24:11.907 }, 00:24:11.907 "claimed": true, 00:24:11.907 "claim_type": "exclusive_write", 00:24:11.907 "zoned": false, 00:24:11.907 "supported_io_types": { 00:24:11.907 "read": true, 00:24:11.907 "write": true, 00:24:11.907 "unmap": true, 00:24:11.907 "flush": true, 00:24:11.907 "reset": true, 00:24:11.907 "nvme_admin": false, 00:24:11.907 "nvme_io": false, 00:24:11.907 "nvme_io_md": false, 00:24:11.907 "write_zeroes": true, 00:24:11.907 "zcopy": true, 00:24:11.907 "get_zone_info": false, 00:24:11.907 "zone_management": false, 00:24:11.907 "zone_append": false, 00:24:11.907 "compare": false, 00:24:11.907 "compare_and_write": false, 00:24:11.907 "abort": true, 00:24:11.907 "seek_hole": false, 00:24:11.907 "seek_data": false, 00:24:11.907 "copy": true, 00:24:11.907 "nvme_iov_md": false 00:24:11.907 }, 00:24:11.907 "memory_domains": [ 00:24:11.907 { 00:24:11.908 "dma_device_id": "system", 00:24:11.908 "dma_device_type": 1 00:24:11.908 }, 00:24:11.908 { 00:24:11.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:11.908 "dma_device_type": 2 00:24:11.908 } 00:24:11.908 ], 00:24:11.908 "driver_specific": {} 00:24:11.908 }' 00:24:11.908 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:11.908 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:11.908 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:11.908 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:11.908 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:11.908 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:11.908 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:11.908 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:11.908 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:11.908 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:11.908 09:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:12.183 09:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
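Annotation (not captured log output): the checks above, together with the BaseBdev4 iteration that follows, are verify_raid_bdev_properties from bdev_raid.sh (sh@194-208 in the trace). It dumps the raid volume's JSON, extracts the configured base bdev names, and then requires every base bdev to report the same block_size, md_size, md_interleave and dif_type as the raid volume itself; that is why each jq filter is traced twice per check (once per side of the comparison) before the [[ 512 == 512 ]] / [[ null == null ]] assertions. A minimal reconstruction from the xtrace, assuming the RPC socket shown above; the harness runs with errexit-style handling, so any failed [[ ]] fails the test:

rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
raid_bdev_info=$($rpc_py bdev_get_bdevs -b Existed_Raid | jq '.[]')   # unwrap the 1-element array
base_bdev_names=$(jq -r '.driver_specific.raid.base_bdevs_list[]
    | select(.is_configured == true).name' <<< "$raid_bdev_info")
for name in $base_bdev_names; do
    base_bdev_info=$($rpc_py bdev_get_bdevs -b "$name" | jq '.[]')
    # every base bdev must match the raid volume's geometry and protection settings
    [[ $(jq .block_size <<< "$raid_bdev_info") == $(jq .block_size <<< "$base_bdev_info") ]]
    [[ $(jq .md_size <<< "$raid_bdev_info") == $(jq .md_size <<< "$base_bdev_info") ]]
    [[ $(jq .md_interleave <<< "$raid_bdev_info") == $(jq .md_interleave <<< "$base_bdev_info") ]]
    [[ $(jq .dif_type <<< "$raid_bdev_info") == $(jq .dif_type <<< "$base_bdev_info") ]]
done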
00:24:12.183 09:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:12.183 09:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:12.183 09:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:12.183 09:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:12.183 "name": "BaseBdev4", 00:24:12.183 "aliases": [ 00:24:12.183 "abc847f9-428f-11ef-a0af-c98d8ee52a94" 00:24:12.183 ], 00:24:12.183 "product_name": "Malloc disk", 00:24:12.183 "block_size": 512, 00:24:12.183 "num_blocks": 65536, 00:24:12.183 "uuid": "abc847f9-428f-11ef-a0af-c98d8ee52a94", 00:24:12.183 "assigned_rate_limits": { 00:24:12.183 "rw_ios_per_sec": 0, 00:24:12.183 "rw_mbytes_per_sec": 0, 00:24:12.183 "r_mbytes_per_sec": 0, 00:24:12.183 "w_mbytes_per_sec": 0 00:24:12.183 }, 00:24:12.183 "claimed": true, 00:24:12.183 "claim_type": "exclusive_write", 00:24:12.183 "zoned": false, 00:24:12.183 "supported_io_types": { 00:24:12.183 "read": true, 00:24:12.183 "write": true, 00:24:12.183 "unmap": true, 00:24:12.183 "flush": true, 00:24:12.183 "reset": true, 00:24:12.183 "nvme_admin": false, 00:24:12.183 "nvme_io": false, 00:24:12.183 "nvme_io_md": false, 00:24:12.183 "write_zeroes": true, 00:24:12.183 "zcopy": true, 00:24:12.183 "get_zone_info": false, 00:24:12.183 "zone_management": false, 00:24:12.183 "zone_append": false, 00:24:12.183 "compare": false, 00:24:12.183 "compare_and_write": false, 00:24:12.183 "abort": true, 00:24:12.183 "seek_hole": false, 00:24:12.183 "seek_data": false, 00:24:12.183 "copy": true, 00:24:12.183 "nvme_iov_md": false 00:24:12.183 }, 00:24:12.183 "memory_domains": [ 00:24:12.183 { 00:24:12.183 "dma_device_id": "system", 00:24:12.183 "dma_device_type": 1 00:24:12.183 }, 00:24:12.183 { 00:24:12.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:12.183 "dma_device_type": 2 00:24:12.183 } 00:24:12.183 ], 00:24:12.183 "driver_specific": {} 00:24:12.183 }' 00:24:12.183 09:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:12.183 09:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:12.183 09:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:12.183 09:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:12.183 09:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:12.183 09:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:12.183 09:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:12.183 09:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:12.183 09:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:12.183 09:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:12.442 09:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:12.442 09:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:12.442 09:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:12.442 
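Annotation (not captured log output): sh@338 above tears the array down over RPC, and the DEBUG lines that follow trace the teardown path inside bdev_raid.c: raid_bdev_delete, then raid_bdev_deconfigure (state online to offline), then destruct, with raid_bdev_cleanup freeing the context once the registered base-bdev count reaches 0. The same call can be issued by hand against a live target:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid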
[2024-07-15 09:50:40.488901] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:12.442 [2024-07-15 09:50:40.488928] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:12.442 [2024-07-15 09:50:40.488972] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:12.442 [2024-07-15 09:50:40.489002] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:12.442 [2024-07-15 09:50:40.489007] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x34688e634f00 name Existed_Raid, state offline 00:24:12.442 09:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 58239 00:24:12.442 09:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 58239 ']' 00:24:12.442 09:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 58239 00:24:12.442 09:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:24:12.442 09:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:24:12.442 09:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 58239 00:24:12.442 09:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:24:12.442 09:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:24:12.442 09:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:24:12.442 killing process with pid 58239 00:24:12.442 09:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58239' 00:24:12.442 09:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 58239 00:24:12.442 [2024-07-15 09:50:40.516665] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:12.442 09:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 58239 00:24:12.700 [2024-07-15 09:50:40.551306] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:12.959 09:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:24:12.959 00:24:12.959 real 0m25.910s 00:24:12.959 user 0m46.506s 00:24:12.959 sys 0m4.466s 00:24:12.959 09:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:12.959 ************************************ 00:24:12.959 END TEST raid_state_function_test 00:24:12.959 ************************************ 00:24:12.959 09:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.959 09:50:40 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:24:12.959 09:50:40 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:24:12.959 09:50:40 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:24:12.959 09:50:40 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:12.959 09:50:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:12.959 ************************************ 00:24:12.959 START TEST raid_state_function_test_sb 00:24:12.959 ************************************ 00:24:12.959 09:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # 
raid_state_function_test raid0 4 true 00:24:12.959 09:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:24:12.959 09:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:24:12.959 09:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:24:12.959 09:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:24:12.959 09:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:24:12.959 09:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:12.959 09:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:24:12.959 09:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:12.959 09:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:12.959 09:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:24:12.959 09:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:12.959 09:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:12.959 09:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:24:12.959 09:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:12.959 09:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:12.959 09:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:24:12.959 09:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:12.959 09:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:12.959 09:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:12.959 09:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:24:12.959 09:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:24:12.959 09:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:24:12.959 09:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:24:12.959 09:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:24:12.959 09:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:24:12.959 09:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:24:12.959 09:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:24:12.959 09:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:24:12.959 09:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:24:12.959 09:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=59050 00:24:12.959 Process raid pid: 59050 00:24:12.959 09:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 59050' 00:24:12.959 
09:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:12.959 09:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 59050 /var/tmp/spdk-raid.sock 00:24:12.959 09:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 59050 ']' 00:24:12.959 09:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:12.959 09:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:12.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:12.959 09:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:12.959 09:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:12.959 09:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:12.959 [2024-07-15 09:50:40.886260] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:24:12.959 [2024-07-15 09:50:40.886531] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:24:13.525 EAL: TSC is not safe to use in SMP mode 00:24:13.525 EAL: TSC is not invariant 00:24:13.525 [2024-07-15 09:50:41.602280] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.782 [2024-07-15 09:50:41.716710] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
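Annotation (not captured log output): the _sb variant repeats the same state-machine walk, but with -s passed to bdev_raid_create so space is reserved on each base bdev for an on-disk superblock. That is why the dumps below show superblock: true, data_offset 2048 (2048 blocks, i.e. 1 MiB, reserved at the head of each base) and data_size 63488 on the 65536-block malloc bdevs, and a 253952-block raid volume (4 × 63488) instead of the 262144 blocks (4 × 65536) seen in the non-superblock run above. Below is a condensed sketch of the RPC sequence driven against the freshly started bdev_svc app (pid 59050, socket /var/tmp/spdk-raid.sock); the real test re-verifies the "configuring" state before each additional malloc bdev is created (sh@265-268) and only expects "online" (sh@270) after the fourth, and raid_state here is an illustrative helper, not a function from the suite:

rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
raid_state() {   # current state of the Existed_Raid array
    $rpc_py bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "Existed_Raid").state'
}
$rpc_py bdev_raid_create -z 64 -s -r raid0 \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
raid_state                                             # "configuring": no base bdevs exist yet
for i in 1 2 3 4; do
    $rpc_py bdev_malloc_create 32 512 -b "BaseBdev$i"  # 32 MiB at 512 B/block = 65536 blocks
done
raid_state                                             # "online" once all four bases are claimed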
00:24:13.782 [2024-07-15 09:50:41.719212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.782 [2024-07-15 09:50:41.719904] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:13.782 [2024-07-15 09:50:41.719916] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:13.782 09:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:13.782 09:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:24:13.782 09:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:14.040 [2024-07-15 09:50:42.035027] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:14.041 [2024-07-15 09:50:42.035088] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:14.041 [2024-07-15 09:50:42.035093] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:14.041 [2024-07-15 09:50:42.035100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:14.041 [2024-07-15 09:50:42.035103] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:14.041 [2024-07-15 09:50:42.035110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:14.041 [2024-07-15 09:50:42.035113] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:14.041 [2024-07-15 09:50:42.035118] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:14.041 09:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:14.041 09:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:14.041 09:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:14.041 09:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:14.041 09:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:14.041 09:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:14.041 09:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:14.041 09:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:14.041 09:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:14.041 09:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:14.041 09:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:14.041 09:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:14.299 09:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:14.299 "name": "Existed_Raid", 00:24:14.299 "uuid": 
"b2dcdcf7-428f-11ef-a0af-c98d8ee52a94", 00:24:14.299 "strip_size_kb": 64, 00:24:14.299 "state": "configuring", 00:24:14.299 "raid_level": "raid0", 00:24:14.299 "superblock": true, 00:24:14.299 "num_base_bdevs": 4, 00:24:14.299 "num_base_bdevs_discovered": 0, 00:24:14.299 "num_base_bdevs_operational": 4, 00:24:14.299 "base_bdevs_list": [ 00:24:14.299 { 00:24:14.299 "name": "BaseBdev1", 00:24:14.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:14.299 "is_configured": false, 00:24:14.299 "data_offset": 0, 00:24:14.299 "data_size": 0 00:24:14.299 }, 00:24:14.299 { 00:24:14.299 "name": "BaseBdev2", 00:24:14.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:14.299 "is_configured": false, 00:24:14.299 "data_offset": 0, 00:24:14.299 "data_size": 0 00:24:14.299 }, 00:24:14.299 { 00:24:14.299 "name": "BaseBdev3", 00:24:14.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:14.299 "is_configured": false, 00:24:14.299 "data_offset": 0, 00:24:14.299 "data_size": 0 00:24:14.299 }, 00:24:14.299 { 00:24:14.299 "name": "BaseBdev4", 00:24:14.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:14.299 "is_configured": false, 00:24:14.299 "data_offset": 0, 00:24:14.299 "data_size": 0 00:24:14.299 } 00:24:14.299 ] 00:24:14.299 }' 00:24:14.299 09:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:14.299 09:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:14.557 09:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:14.815 [2024-07-15 09:50:42.687025] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:14.815 [2024-07-15 09:50:42.687060] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf0940c34500 name Existed_Raid, state configuring 00:24:14.815 09:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:15.074 [2024-07-15 09:50:42.951072] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:15.074 [2024-07-15 09:50:42.951136] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:15.074 [2024-07-15 09:50:42.951140] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:15.074 [2024-07-15 09:50:42.951147] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:15.074 [2024-07-15 09:50:42.951151] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:15.074 [2024-07-15 09:50:42.951157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:15.074 [2024-07-15 09:50:42.951159] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:15.074 [2024-07-15 09:50:42.951165] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:15.074 09:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:15.074 [2024-07-15 09:50:43.160440] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is 
claimed 00:24:15.074 BaseBdev1 00:24:15.074 09:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:24:15.074 09:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:24:15.074 09:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:15.074 09:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:15.074 09:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:15.074 09:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:15.074 09:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:15.331 09:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:15.589 [ 00:24:15.589 { 00:24:15.589 "name": "BaseBdev1", 00:24:15.589 "aliases": [ 00:24:15.589 "b3886216-428f-11ef-a0af-c98d8ee52a94" 00:24:15.589 ], 00:24:15.589 "product_name": "Malloc disk", 00:24:15.589 "block_size": 512, 00:24:15.589 "num_blocks": 65536, 00:24:15.589 "uuid": "b3886216-428f-11ef-a0af-c98d8ee52a94", 00:24:15.589 "assigned_rate_limits": { 00:24:15.589 "rw_ios_per_sec": 0, 00:24:15.589 "rw_mbytes_per_sec": 0, 00:24:15.589 "r_mbytes_per_sec": 0, 00:24:15.589 "w_mbytes_per_sec": 0 00:24:15.589 }, 00:24:15.589 "claimed": true, 00:24:15.589 "claim_type": "exclusive_write", 00:24:15.589 "zoned": false, 00:24:15.589 "supported_io_types": { 00:24:15.589 "read": true, 00:24:15.589 "write": true, 00:24:15.589 "unmap": true, 00:24:15.589 "flush": true, 00:24:15.589 "reset": true, 00:24:15.589 "nvme_admin": false, 00:24:15.589 "nvme_io": false, 00:24:15.589 "nvme_io_md": false, 00:24:15.589 "write_zeroes": true, 00:24:15.589 "zcopy": true, 00:24:15.589 "get_zone_info": false, 00:24:15.589 "zone_management": false, 00:24:15.589 "zone_append": false, 00:24:15.589 "compare": false, 00:24:15.589 "compare_and_write": false, 00:24:15.589 "abort": true, 00:24:15.589 "seek_hole": false, 00:24:15.589 "seek_data": false, 00:24:15.589 "copy": true, 00:24:15.589 "nvme_iov_md": false 00:24:15.589 }, 00:24:15.589 "memory_domains": [ 00:24:15.589 { 00:24:15.589 "dma_device_id": "system", 00:24:15.589 "dma_device_type": 1 00:24:15.589 }, 00:24:15.589 { 00:24:15.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:15.589 "dma_device_type": 2 00:24:15.589 } 00:24:15.589 ], 00:24:15.589 "driver_specific": {} 00:24:15.589 } 00:24:15.589 ] 00:24:15.589 09:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:15.589 09:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:15.589 09:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:15.590 09:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:15.590 09:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:15.590 09:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:15.590 09:50:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:15.590 09:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:15.590 09:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:15.590 09:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:15.590 09:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:15.590 09:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:15.590 09:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:15.848 09:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:15.848 "name": "Existed_Raid", 00:24:15.848 "uuid": "b368a3eb-428f-11ef-a0af-c98d8ee52a94", 00:24:15.848 "strip_size_kb": 64, 00:24:15.848 "state": "configuring", 00:24:15.848 "raid_level": "raid0", 00:24:15.848 "superblock": true, 00:24:15.848 "num_base_bdevs": 4, 00:24:15.848 "num_base_bdevs_discovered": 1, 00:24:15.848 "num_base_bdevs_operational": 4, 00:24:15.848 "base_bdevs_list": [ 00:24:15.848 { 00:24:15.848 "name": "BaseBdev1", 00:24:15.848 "uuid": "b3886216-428f-11ef-a0af-c98d8ee52a94", 00:24:15.848 "is_configured": true, 00:24:15.848 "data_offset": 2048, 00:24:15.848 "data_size": 63488 00:24:15.848 }, 00:24:15.848 { 00:24:15.848 "name": "BaseBdev2", 00:24:15.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:15.848 "is_configured": false, 00:24:15.848 "data_offset": 0, 00:24:15.848 "data_size": 0 00:24:15.848 }, 00:24:15.848 { 00:24:15.848 "name": "BaseBdev3", 00:24:15.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:15.848 "is_configured": false, 00:24:15.848 "data_offset": 0, 00:24:15.848 "data_size": 0 00:24:15.848 }, 00:24:15.848 { 00:24:15.848 "name": "BaseBdev4", 00:24:15.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:15.848 "is_configured": false, 00:24:15.848 "data_offset": 0, 00:24:15.848 "data_size": 0 00:24:15.848 } 00:24:15.848 ] 00:24:15.848 }' 00:24:15.848 09:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:15.848 09:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:16.107 09:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:16.365 [2024-07-15 09:50:44.247148] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:16.365 [2024-07-15 09:50:44.247191] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf0940c34500 name Existed_Raid, state configuring 00:24:16.365 09:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:16.365 [2024-07-15 09:50:44.463147] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:16.365 [2024-07-15 09:50:44.464113] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:16.365 [2024-07-15 09:50:44.464159] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:16.365 [2024-07-15 09:50:44.464164] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:16.365 [2024-07-15 09:50:44.464171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:16.366 [2024-07-15 09:50:44.464175] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:16.366 [2024-07-15 09:50:44.464181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:16.624 09:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:24:16.624 09:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:16.624 09:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:16.624 09:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:16.624 09:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:16.624 09:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:16.624 09:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:16.624 09:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:16.624 09:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:16.624 09:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:16.624 09:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:16.624 09:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:16.624 09:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:16.624 09:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:16.624 09:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:16.624 "name": "Existed_Raid", 00:24:16.624 "uuid": "b44f5d4b-428f-11ef-a0af-c98d8ee52a94", 00:24:16.624 "strip_size_kb": 64, 00:24:16.624 "state": "configuring", 00:24:16.624 "raid_level": "raid0", 00:24:16.624 "superblock": true, 00:24:16.624 "num_base_bdevs": 4, 00:24:16.624 "num_base_bdevs_discovered": 1, 00:24:16.624 "num_base_bdevs_operational": 4, 00:24:16.624 "base_bdevs_list": [ 00:24:16.624 { 00:24:16.624 "name": "BaseBdev1", 00:24:16.624 "uuid": "b3886216-428f-11ef-a0af-c98d8ee52a94", 00:24:16.624 "is_configured": true, 00:24:16.624 "data_offset": 2048, 00:24:16.624 "data_size": 63488 00:24:16.624 }, 00:24:16.624 { 00:24:16.624 "name": "BaseBdev2", 00:24:16.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:16.624 "is_configured": false, 00:24:16.624 "data_offset": 0, 00:24:16.624 "data_size": 0 00:24:16.624 }, 00:24:16.624 { 00:24:16.624 "name": "BaseBdev3", 00:24:16.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:16.624 "is_configured": false, 00:24:16.624 "data_offset": 0, 00:24:16.624 "data_size": 0 00:24:16.624 }, 00:24:16.624 { 00:24:16.624 "name": "BaseBdev4", 
00:24:16.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:16.624 "is_configured": false, 00:24:16.624 "data_offset": 0, 00:24:16.624 "data_size": 0 00:24:16.624 } 00:24:16.624 ] 00:24:16.624 }' 00:24:16.624 09:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:16.624 09:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:16.883 09:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:17.141 [2024-07-15 09:50:45.139313] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:17.141 BaseBdev2 00:24:17.141 09:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:24:17.141 09:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:24:17.141 09:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:17.141 09:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:17.141 09:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:17.141 09:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:17.141 09:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:17.399 09:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:17.658 [ 00:24:17.658 { 00:24:17.658 "name": "BaseBdev2", 00:24:17.658 "aliases": [ 00:24:17.658 "b4b68507-428f-11ef-a0af-c98d8ee52a94" 00:24:17.658 ], 00:24:17.658 "product_name": "Malloc disk", 00:24:17.658 "block_size": 512, 00:24:17.658 "num_blocks": 65536, 00:24:17.658 "uuid": "b4b68507-428f-11ef-a0af-c98d8ee52a94", 00:24:17.658 "assigned_rate_limits": { 00:24:17.658 "rw_ios_per_sec": 0, 00:24:17.658 "rw_mbytes_per_sec": 0, 00:24:17.658 "r_mbytes_per_sec": 0, 00:24:17.658 "w_mbytes_per_sec": 0 00:24:17.658 }, 00:24:17.658 "claimed": true, 00:24:17.658 "claim_type": "exclusive_write", 00:24:17.658 "zoned": false, 00:24:17.658 "supported_io_types": { 00:24:17.658 "read": true, 00:24:17.658 "write": true, 00:24:17.658 "unmap": true, 00:24:17.658 "flush": true, 00:24:17.658 "reset": true, 00:24:17.658 "nvme_admin": false, 00:24:17.658 "nvme_io": false, 00:24:17.658 "nvme_io_md": false, 00:24:17.658 "write_zeroes": true, 00:24:17.658 "zcopy": true, 00:24:17.658 "get_zone_info": false, 00:24:17.658 "zone_management": false, 00:24:17.658 "zone_append": false, 00:24:17.658 "compare": false, 00:24:17.658 "compare_and_write": false, 00:24:17.658 "abort": true, 00:24:17.658 "seek_hole": false, 00:24:17.658 "seek_data": false, 00:24:17.658 "copy": true, 00:24:17.658 "nvme_iov_md": false 00:24:17.658 }, 00:24:17.658 "memory_domains": [ 00:24:17.658 { 00:24:17.658 "dma_device_id": "system", 00:24:17.658 "dma_device_type": 1 00:24:17.658 }, 00:24:17.658 { 00:24:17.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:17.658 "dma_device_type": 2 00:24:17.658 } 00:24:17.658 ], 00:24:17.658 "driver_specific": {} 00:24:17.658 } 00:24:17.658 ] 00:24:17.658 09:50:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:17.658 09:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:17.658 09:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:17.658 09:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:17.658 09:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:17.658 09:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:17.658 09:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:17.658 09:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:17.658 09:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:17.658 09:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:17.658 09:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:17.658 09:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:17.658 09:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:17.658 09:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:17.658 09:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:17.916 09:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:17.916 "name": "Existed_Raid", 00:24:17.916 "uuid": "b44f5d4b-428f-11ef-a0af-c98d8ee52a94", 00:24:17.916 "strip_size_kb": 64, 00:24:17.916 "state": "configuring", 00:24:17.916 "raid_level": "raid0", 00:24:17.916 "superblock": true, 00:24:17.916 "num_base_bdevs": 4, 00:24:17.916 "num_base_bdevs_discovered": 2, 00:24:17.916 "num_base_bdevs_operational": 4, 00:24:17.916 "base_bdevs_list": [ 00:24:17.916 { 00:24:17.916 "name": "BaseBdev1", 00:24:17.916 "uuid": "b3886216-428f-11ef-a0af-c98d8ee52a94", 00:24:17.916 "is_configured": true, 00:24:17.916 "data_offset": 2048, 00:24:17.916 "data_size": 63488 00:24:17.916 }, 00:24:17.916 { 00:24:17.916 "name": "BaseBdev2", 00:24:17.916 "uuid": "b4b68507-428f-11ef-a0af-c98d8ee52a94", 00:24:17.916 "is_configured": true, 00:24:17.916 "data_offset": 2048, 00:24:17.916 "data_size": 63488 00:24:17.916 }, 00:24:17.916 { 00:24:17.916 "name": "BaseBdev3", 00:24:17.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:17.916 "is_configured": false, 00:24:17.916 "data_offset": 0, 00:24:17.916 "data_size": 0 00:24:17.916 }, 00:24:17.916 { 00:24:17.916 "name": "BaseBdev4", 00:24:17.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:17.916 "is_configured": false, 00:24:17.916 "data_offset": 0, 00:24:17.916 "data_size": 0 00:24:17.916 } 00:24:17.916 ] 00:24:17.916 }' 00:24:17.916 09:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:17.916 09:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:18.175 09:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:18.434 [2024-07-15 09:50:46.359327] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:18.434 BaseBdev3 00:24:18.434 09:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:24:18.434 09:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:24:18.434 09:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:18.434 09:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:18.434 09:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:18.434 09:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:18.434 09:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:18.692 09:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:18.692 [ 00:24:18.692 { 00:24:18.693 "name": "BaseBdev3", 00:24:18.693 "aliases": [ 00:24:18.693 "b570aff1-428f-11ef-a0af-c98d8ee52a94" 00:24:18.693 ], 00:24:18.693 "product_name": "Malloc disk", 00:24:18.693 "block_size": 512, 00:24:18.693 "num_blocks": 65536, 00:24:18.693 "uuid": "b570aff1-428f-11ef-a0af-c98d8ee52a94", 00:24:18.693 "assigned_rate_limits": { 00:24:18.693 "rw_ios_per_sec": 0, 00:24:18.693 "rw_mbytes_per_sec": 0, 00:24:18.693 "r_mbytes_per_sec": 0, 00:24:18.693 "w_mbytes_per_sec": 0 00:24:18.693 }, 00:24:18.693 "claimed": true, 00:24:18.693 "claim_type": "exclusive_write", 00:24:18.693 "zoned": false, 00:24:18.693 "supported_io_types": { 00:24:18.693 "read": true, 00:24:18.693 "write": true, 00:24:18.693 "unmap": true, 00:24:18.693 "flush": true, 00:24:18.693 "reset": true, 00:24:18.693 "nvme_admin": false, 00:24:18.693 "nvme_io": false, 00:24:18.693 "nvme_io_md": false, 00:24:18.693 "write_zeroes": true, 00:24:18.693 "zcopy": true, 00:24:18.693 "get_zone_info": false, 00:24:18.693 "zone_management": false, 00:24:18.693 "zone_append": false, 00:24:18.693 "compare": false, 00:24:18.693 "compare_and_write": false, 00:24:18.693 "abort": true, 00:24:18.693 "seek_hole": false, 00:24:18.693 "seek_data": false, 00:24:18.693 "copy": true, 00:24:18.693 "nvme_iov_md": false 00:24:18.693 }, 00:24:18.693 "memory_domains": [ 00:24:18.693 { 00:24:18.693 "dma_device_id": "system", 00:24:18.693 "dma_device_type": 1 00:24:18.693 }, 00:24:18.693 { 00:24:18.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:18.693 "dma_device_type": 2 00:24:18.693 } 00:24:18.693 ], 00:24:18.693 "driver_specific": {} 00:24:18.693 } 00:24:18.693 ] 00:24:18.693 09:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:18.951 09:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:18.951 09:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:18.951 09:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:18.951 09:50:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:18.951 09:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:18.951 09:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:18.951 09:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:18.951 09:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:18.951 09:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:18.951 09:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:18.951 09:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:18.951 09:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:18.951 09:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:18.951 09:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:18.951 09:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:18.951 "name": "Existed_Raid", 00:24:18.951 "uuid": "b44f5d4b-428f-11ef-a0af-c98d8ee52a94", 00:24:18.951 "strip_size_kb": 64, 00:24:18.951 "state": "configuring", 00:24:18.951 "raid_level": "raid0", 00:24:18.952 "superblock": true, 00:24:18.952 "num_base_bdevs": 4, 00:24:18.952 "num_base_bdevs_discovered": 3, 00:24:18.952 "num_base_bdevs_operational": 4, 00:24:18.952 "base_bdevs_list": [ 00:24:18.952 { 00:24:18.952 "name": "BaseBdev1", 00:24:18.952 "uuid": "b3886216-428f-11ef-a0af-c98d8ee52a94", 00:24:18.952 "is_configured": true, 00:24:18.952 "data_offset": 2048, 00:24:18.952 "data_size": 63488 00:24:18.952 }, 00:24:18.952 { 00:24:18.952 "name": "BaseBdev2", 00:24:18.952 "uuid": "b4b68507-428f-11ef-a0af-c98d8ee52a94", 00:24:18.952 "is_configured": true, 00:24:18.952 "data_offset": 2048, 00:24:18.952 "data_size": 63488 00:24:18.952 }, 00:24:18.952 { 00:24:18.952 "name": "BaseBdev3", 00:24:18.952 "uuid": "b570aff1-428f-11ef-a0af-c98d8ee52a94", 00:24:18.952 "is_configured": true, 00:24:18.952 "data_offset": 2048, 00:24:18.952 "data_size": 63488 00:24:18.952 }, 00:24:18.952 { 00:24:18.952 "name": "BaseBdev4", 00:24:18.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:18.952 "is_configured": false, 00:24:18.952 "data_offset": 0, 00:24:18.952 "data_size": 0 00:24:18.952 } 00:24:18.952 ] 00:24:18.952 }' 00:24:18.952 09:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:18.952 09:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.210 09:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:19.467 [2024-07-15 09:50:47.475356] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:19.467 [2024-07-15 09:50:47.475413] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xf0940c34a00 00:24:19.467 [2024-07-15 09:50:47.475418] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:19.467 [2024-07-15 
09:50:47.475436] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf0940c97e20 00:24:19.467 [2024-07-15 09:50:47.475482] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xf0940c34a00 00:24:19.467 [2024-07-15 09:50:47.475486] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xf0940c34a00 00:24:19.467 [2024-07-15 09:50:47.475506] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:19.467 BaseBdev4 00:24:19.467 09:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:24:19.467 09:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:24:19.467 09:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:19.467 09:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:19.467 09:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:19.467 09:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:19.467 09:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:19.725 09:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:19.983 [ 00:24:19.983 { 00:24:19.983 "name": "BaseBdev4", 00:24:19.983 "aliases": [ 00:24:19.983 "b61afb0a-428f-11ef-a0af-c98d8ee52a94" 00:24:19.983 ], 00:24:19.983 "product_name": "Malloc disk", 00:24:19.983 "block_size": 512, 00:24:19.983 "num_blocks": 65536, 00:24:19.983 "uuid": "b61afb0a-428f-11ef-a0af-c98d8ee52a94", 00:24:19.983 "assigned_rate_limits": { 00:24:19.983 "rw_ios_per_sec": 0, 00:24:19.983 "rw_mbytes_per_sec": 0, 00:24:19.983 "r_mbytes_per_sec": 0, 00:24:19.983 "w_mbytes_per_sec": 0 00:24:19.983 }, 00:24:19.983 "claimed": true, 00:24:19.983 "claim_type": "exclusive_write", 00:24:19.983 "zoned": false, 00:24:19.983 "supported_io_types": { 00:24:19.983 "read": true, 00:24:19.983 "write": true, 00:24:19.983 "unmap": true, 00:24:19.983 "flush": true, 00:24:19.983 "reset": true, 00:24:19.983 "nvme_admin": false, 00:24:19.983 "nvme_io": false, 00:24:19.983 "nvme_io_md": false, 00:24:19.983 "write_zeroes": true, 00:24:19.983 "zcopy": true, 00:24:19.983 "get_zone_info": false, 00:24:19.983 "zone_management": false, 00:24:19.983 "zone_append": false, 00:24:19.983 "compare": false, 00:24:19.983 "compare_and_write": false, 00:24:19.983 "abort": true, 00:24:19.983 "seek_hole": false, 00:24:19.983 "seek_data": false, 00:24:19.983 "copy": true, 00:24:19.983 "nvme_iov_md": false 00:24:19.983 }, 00:24:19.983 "memory_domains": [ 00:24:19.983 { 00:24:19.983 "dma_device_id": "system", 00:24:19.983 "dma_device_type": 1 00:24:19.983 }, 00:24:19.983 { 00:24:19.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:19.983 "dma_device_type": 2 00:24:19.983 } 00:24:19.983 ], 00:24:19.983 "driver_specific": {} 00:24:19.983 } 00:24:19.983 ] 00:24:19.983 09:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:19.983 09:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:19.983 09:50:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:19.983 09:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:24:19.983 09:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:19.983 09:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:19.983 09:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:19.983 09:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:19.983 09:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:19.983 09:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:19.983 09:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:19.983 09:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:19.983 09:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:19.983 09:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:19.983 09:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:19.983 09:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:19.983 "name": "Existed_Raid", 00:24:19.983 "uuid": "b44f5d4b-428f-11ef-a0af-c98d8ee52a94", 00:24:19.983 "strip_size_kb": 64, 00:24:19.983 "state": "online", 00:24:19.983 "raid_level": "raid0", 00:24:19.983 "superblock": true, 00:24:19.983 "num_base_bdevs": 4, 00:24:19.983 "num_base_bdevs_discovered": 4, 00:24:19.983 "num_base_bdevs_operational": 4, 00:24:19.983 "base_bdevs_list": [ 00:24:19.983 { 00:24:19.983 "name": "BaseBdev1", 00:24:19.983 "uuid": "b3886216-428f-11ef-a0af-c98d8ee52a94", 00:24:19.983 "is_configured": true, 00:24:19.983 "data_offset": 2048, 00:24:19.983 "data_size": 63488 00:24:19.983 }, 00:24:19.983 { 00:24:19.983 "name": "BaseBdev2", 00:24:19.983 "uuid": "b4b68507-428f-11ef-a0af-c98d8ee52a94", 00:24:19.983 "is_configured": true, 00:24:19.983 "data_offset": 2048, 00:24:19.983 "data_size": 63488 00:24:19.983 }, 00:24:19.983 { 00:24:19.983 "name": "BaseBdev3", 00:24:19.983 "uuid": "b570aff1-428f-11ef-a0af-c98d8ee52a94", 00:24:19.983 "is_configured": true, 00:24:19.983 "data_offset": 2048, 00:24:19.983 "data_size": 63488 00:24:19.983 }, 00:24:19.983 { 00:24:19.983 "name": "BaseBdev4", 00:24:19.983 "uuid": "b61afb0a-428f-11ef-a0af-c98d8ee52a94", 00:24:19.983 "is_configured": true, 00:24:19.983 "data_offset": 2048, 00:24:19.983 "data_size": 63488 00:24:19.983 } 00:24:19.983 ] 00:24:19.983 }' 00:24:19.983 09:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:19.983 09:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:20.551 09:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:24:20.551 09:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:24:20.551 09:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local 
raid_bdev_info 00:24:20.551 09:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:20.551 09:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:20.551 09:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:24:20.551 09:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:20.551 09:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:20.551 [2024-07-15 09:50:48.539366] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:20.551 09:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:20.551 "name": "Existed_Raid", 00:24:20.551 "aliases": [ 00:24:20.551 "b44f5d4b-428f-11ef-a0af-c98d8ee52a94" 00:24:20.551 ], 00:24:20.551 "product_name": "Raid Volume", 00:24:20.551 "block_size": 512, 00:24:20.551 "num_blocks": 253952, 00:24:20.551 "uuid": "b44f5d4b-428f-11ef-a0af-c98d8ee52a94", 00:24:20.551 "assigned_rate_limits": { 00:24:20.551 "rw_ios_per_sec": 0, 00:24:20.551 "rw_mbytes_per_sec": 0, 00:24:20.551 "r_mbytes_per_sec": 0, 00:24:20.551 "w_mbytes_per_sec": 0 00:24:20.551 }, 00:24:20.551 "claimed": false, 00:24:20.551 "zoned": false, 00:24:20.551 "supported_io_types": { 00:24:20.551 "read": true, 00:24:20.551 "write": true, 00:24:20.551 "unmap": true, 00:24:20.551 "flush": true, 00:24:20.551 "reset": true, 00:24:20.551 "nvme_admin": false, 00:24:20.551 "nvme_io": false, 00:24:20.551 "nvme_io_md": false, 00:24:20.551 "write_zeroes": true, 00:24:20.551 "zcopy": false, 00:24:20.551 "get_zone_info": false, 00:24:20.551 "zone_management": false, 00:24:20.551 "zone_append": false, 00:24:20.551 "compare": false, 00:24:20.551 "compare_and_write": false, 00:24:20.551 "abort": false, 00:24:20.551 "seek_hole": false, 00:24:20.551 "seek_data": false, 00:24:20.551 "copy": false, 00:24:20.551 "nvme_iov_md": false 00:24:20.551 }, 00:24:20.551 "memory_domains": [ 00:24:20.551 { 00:24:20.551 "dma_device_id": "system", 00:24:20.551 "dma_device_type": 1 00:24:20.551 }, 00:24:20.551 { 00:24:20.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:20.551 "dma_device_type": 2 00:24:20.551 }, 00:24:20.551 { 00:24:20.551 "dma_device_id": "system", 00:24:20.551 "dma_device_type": 1 00:24:20.551 }, 00:24:20.551 { 00:24:20.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:20.551 "dma_device_type": 2 00:24:20.551 }, 00:24:20.551 { 00:24:20.551 "dma_device_id": "system", 00:24:20.551 "dma_device_type": 1 00:24:20.551 }, 00:24:20.551 { 00:24:20.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:20.551 "dma_device_type": 2 00:24:20.551 }, 00:24:20.551 { 00:24:20.551 "dma_device_id": "system", 00:24:20.551 "dma_device_type": 1 00:24:20.551 }, 00:24:20.551 { 00:24:20.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:20.551 "dma_device_type": 2 00:24:20.551 } 00:24:20.551 ], 00:24:20.551 "driver_specific": { 00:24:20.551 "raid": { 00:24:20.551 "uuid": "b44f5d4b-428f-11ef-a0af-c98d8ee52a94", 00:24:20.551 "strip_size_kb": 64, 00:24:20.551 "state": "online", 00:24:20.551 "raid_level": "raid0", 00:24:20.551 "superblock": true, 00:24:20.551 "num_base_bdevs": 4, 00:24:20.551 "num_base_bdevs_discovered": 4, 00:24:20.551 "num_base_bdevs_operational": 4, 00:24:20.551 "base_bdevs_list": [ 00:24:20.551 { 00:24:20.551 "name": "BaseBdev1", 00:24:20.551 "uuid": 
"b3886216-428f-11ef-a0af-c98d8ee52a94", 00:24:20.551 "is_configured": true, 00:24:20.551 "data_offset": 2048, 00:24:20.551 "data_size": 63488 00:24:20.551 }, 00:24:20.551 { 00:24:20.551 "name": "BaseBdev2", 00:24:20.551 "uuid": "b4b68507-428f-11ef-a0af-c98d8ee52a94", 00:24:20.551 "is_configured": true, 00:24:20.551 "data_offset": 2048, 00:24:20.551 "data_size": 63488 00:24:20.551 }, 00:24:20.551 { 00:24:20.551 "name": "BaseBdev3", 00:24:20.551 "uuid": "b570aff1-428f-11ef-a0af-c98d8ee52a94", 00:24:20.551 "is_configured": true, 00:24:20.551 "data_offset": 2048, 00:24:20.551 "data_size": 63488 00:24:20.551 }, 00:24:20.551 { 00:24:20.551 "name": "BaseBdev4", 00:24:20.551 "uuid": "b61afb0a-428f-11ef-a0af-c98d8ee52a94", 00:24:20.551 "is_configured": true, 00:24:20.551 "data_offset": 2048, 00:24:20.551 "data_size": 63488 00:24:20.551 } 00:24:20.551 ] 00:24:20.551 } 00:24:20.551 } 00:24:20.551 }' 00:24:20.551 09:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:20.551 09:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:24:20.551 BaseBdev2 00:24:20.551 BaseBdev3 00:24:20.551 BaseBdev4' 00:24:20.551 09:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:20.551 09:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:20.551 09:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:24:20.810 09:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:20.810 "name": "BaseBdev1", 00:24:20.810 "aliases": [ 00:24:20.810 "b3886216-428f-11ef-a0af-c98d8ee52a94" 00:24:20.810 ], 00:24:20.810 "product_name": "Malloc disk", 00:24:20.810 "block_size": 512, 00:24:20.810 "num_blocks": 65536, 00:24:20.810 "uuid": "b3886216-428f-11ef-a0af-c98d8ee52a94", 00:24:20.810 "assigned_rate_limits": { 00:24:20.810 "rw_ios_per_sec": 0, 00:24:20.810 "rw_mbytes_per_sec": 0, 00:24:20.810 "r_mbytes_per_sec": 0, 00:24:20.810 "w_mbytes_per_sec": 0 00:24:20.810 }, 00:24:20.810 "claimed": true, 00:24:20.810 "claim_type": "exclusive_write", 00:24:20.810 "zoned": false, 00:24:20.810 "supported_io_types": { 00:24:20.810 "read": true, 00:24:20.810 "write": true, 00:24:20.810 "unmap": true, 00:24:20.810 "flush": true, 00:24:20.810 "reset": true, 00:24:20.810 "nvme_admin": false, 00:24:20.810 "nvme_io": false, 00:24:20.810 "nvme_io_md": false, 00:24:20.810 "write_zeroes": true, 00:24:20.810 "zcopy": true, 00:24:20.810 "get_zone_info": false, 00:24:20.810 "zone_management": false, 00:24:20.810 "zone_append": false, 00:24:20.810 "compare": false, 00:24:20.810 "compare_and_write": false, 00:24:20.810 "abort": true, 00:24:20.810 "seek_hole": false, 00:24:20.810 "seek_data": false, 00:24:20.810 "copy": true, 00:24:20.810 "nvme_iov_md": false 00:24:20.810 }, 00:24:20.810 "memory_domains": [ 00:24:20.810 { 00:24:20.810 "dma_device_id": "system", 00:24:20.810 "dma_device_type": 1 00:24:20.810 }, 00:24:20.810 { 00:24:20.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:20.810 "dma_device_type": 2 00:24:20.810 } 00:24:20.810 ], 00:24:20.810 "driver_specific": {} 00:24:20.810 }' 00:24:20.810 09:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:20.810 09:50:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:20.810 09:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:20.810 09:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:20.810 09:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:20.810 09:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:20.810 09:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:20.810 09:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:20.810 09:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:20.810 09:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:20.810 09:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:20.810 09:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:20.810 09:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:20.810 09:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:20.810 09:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:21.068 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:21.068 "name": "BaseBdev2", 00:24:21.068 "aliases": [ 00:24:21.068 "b4b68507-428f-11ef-a0af-c98d8ee52a94" 00:24:21.068 ], 00:24:21.068 "product_name": "Malloc disk", 00:24:21.068 "block_size": 512, 00:24:21.068 "num_blocks": 65536, 00:24:21.068 "uuid": "b4b68507-428f-11ef-a0af-c98d8ee52a94", 00:24:21.068 "assigned_rate_limits": { 00:24:21.068 "rw_ios_per_sec": 0, 00:24:21.068 "rw_mbytes_per_sec": 0, 00:24:21.068 "r_mbytes_per_sec": 0, 00:24:21.068 "w_mbytes_per_sec": 0 00:24:21.068 }, 00:24:21.068 "claimed": true, 00:24:21.068 "claim_type": "exclusive_write", 00:24:21.068 "zoned": false, 00:24:21.068 "supported_io_types": { 00:24:21.068 "read": true, 00:24:21.068 "write": true, 00:24:21.068 "unmap": true, 00:24:21.068 "flush": true, 00:24:21.068 "reset": true, 00:24:21.068 "nvme_admin": false, 00:24:21.068 "nvme_io": false, 00:24:21.068 "nvme_io_md": false, 00:24:21.068 "write_zeroes": true, 00:24:21.068 "zcopy": true, 00:24:21.068 "get_zone_info": false, 00:24:21.068 "zone_management": false, 00:24:21.068 "zone_append": false, 00:24:21.068 "compare": false, 00:24:21.068 "compare_and_write": false, 00:24:21.068 "abort": true, 00:24:21.068 "seek_hole": false, 00:24:21.068 "seek_data": false, 00:24:21.068 "copy": true, 00:24:21.068 "nvme_iov_md": false 00:24:21.068 }, 00:24:21.068 "memory_domains": [ 00:24:21.068 { 00:24:21.068 "dma_device_id": "system", 00:24:21.068 "dma_device_type": 1 00:24:21.068 }, 00:24:21.068 { 00:24:21.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:21.068 "dma_device_type": 2 00:24:21.068 } 00:24:21.068 ], 00:24:21.068 "driver_specific": {} 00:24:21.068 }' 00:24:21.068 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:21.068 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:21.068 09:50:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:21.068 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:21.068 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:21.068 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:21.068 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:21.068 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:21.068 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:21.068 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:21.068 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:21.068 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:21.069 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:21.069 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:21.069 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:21.327 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:21.327 "name": "BaseBdev3", 00:24:21.327 "aliases": [ 00:24:21.327 "b570aff1-428f-11ef-a0af-c98d8ee52a94" 00:24:21.327 ], 00:24:21.327 "product_name": "Malloc disk", 00:24:21.327 "block_size": 512, 00:24:21.327 "num_blocks": 65536, 00:24:21.327 "uuid": "b570aff1-428f-11ef-a0af-c98d8ee52a94", 00:24:21.327 "assigned_rate_limits": { 00:24:21.327 "rw_ios_per_sec": 0, 00:24:21.327 "rw_mbytes_per_sec": 0, 00:24:21.327 "r_mbytes_per_sec": 0, 00:24:21.327 "w_mbytes_per_sec": 0 00:24:21.327 }, 00:24:21.327 "claimed": true, 00:24:21.327 "claim_type": "exclusive_write", 00:24:21.327 "zoned": false, 00:24:21.327 "supported_io_types": { 00:24:21.327 "read": true, 00:24:21.327 "write": true, 00:24:21.327 "unmap": true, 00:24:21.327 "flush": true, 00:24:21.327 "reset": true, 00:24:21.327 "nvme_admin": false, 00:24:21.327 "nvme_io": false, 00:24:21.327 "nvme_io_md": false, 00:24:21.327 "write_zeroes": true, 00:24:21.327 "zcopy": true, 00:24:21.327 "get_zone_info": false, 00:24:21.327 "zone_management": false, 00:24:21.327 "zone_append": false, 00:24:21.327 "compare": false, 00:24:21.327 "compare_and_write": false, 00:24:21.327 "abort": true, 00:24:21.327 "seek_hole": false, 00:24:21.327 "seek_data": false, 00:24:21.327 "copy": true, 00:24:21.327 "nvme_iov_md": false 00:24:21.327 }, 00:24:21.327 "memory_domains": [ 00:24:21.327 { 00:24:21.327 "dma_device_id": "system", 00:24:21.327 "dma_device_type": 1 00:24:21.327 }, 00:24:21.327 { 00:24:21.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:21.327 "dma_device_type": 2 00:24:21.327 } 00:24:21.327 ], 00:24:21.327 "driver_specific": {} 00:24:21.327 }' 00:24:21.327 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:21.327 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:21.327 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:21.327 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 
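The per-bdev property checks traced above all follow one pattern: dump the base bdev with bdev_get_bdevs, then compare individual fields with jq. A minimal standalone sketch of that pattern follows; it assumes the same SPDK app is still listening on /var/tmp/spdk-raid.sock and that a 512-byte malloc disk named BaseBdev1 exists, as in this run:

info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 | jq '.[]')
# The malloc disks in this test are created with a 512-byte block size and no
# metadata, so md_size, md_interleave and dif_type all come back as null.
[[ $(jq .block_size <<< "$info") == 512 ]]
[[ $(jq .md_size <<< "$info") == null ]]
[[ $(jq .md_interleave <<< "$info") == null ]]
[[ $(jq .dif_type <<< "$info") == null ]]

The variable name info is illustrative; the script itself stores the dump in base_bdev_info, as the trace shows.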
00:24:21.327 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:21.327 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:21.327 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:21.327 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:21.327 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:21.327 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:21.327 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:21.327 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:21.327 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:21.587 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:21.587 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:21.587 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:21.587 "name": "BaseBdev4", 00:24:21.587 "aliases": [ 00:24:21.587 "b61afb0a-428f-11ef-a0af-c98d8ee52a94" 00:24:21.587 ], 00:24:21.587 "product_name": "Malloc disk", 00:24:21.587 "block_size": 512, 00:24:21.587 "num_blocks": 65536, 00:24:21.587 "uuid": "b61afb0a-428f-11ef-a0af-c98d8ee52a94", 00:24:21.587 "assigned_rate_limits": { 00:24:21.587 "rw_ios_per_sec": 0, 00:24:21.587 "rw_mbytes_per_sec": 0, 00:24:21.587 "r_mbytes_per_sec": 0, 00:24:21.587 "w_mbytes_per_sec": 0 00:24:21.587 }, 00:24:21.587 "claimed": true, 00:24:21.587 "claim_type": "exclusive_write", 00:24:21.587 "zoned": false, 00:24:21.587 "supported_io_types": { 00:24:21.587 "read": true, 00:24:21.587 "write": true, 00:24:21.587 "unmap": true, 00:24:21.587 "flush": true, 00:24:21.587 "reset": true, 00:24:21.587 "nvme_admin": false, 00:24:21.587 "nvme_io": false, 00:24:21.587 "nvme_io_md": false, 00:24:21.587 "write_zeroes": true, 00:24:21.587 "zcopy": true, 00:24:21.587 "get_zone_info": false, 00:24:21.587 "zone_management": false, 00:24:21.587 "zone_append": false, 00:24:21.587 "compare": false, 00:24:21.587 "compare_and_write": false, 00:24:21.587 "abort": true, 00:24:21.587 "seek_hole": false, 00:24:21.587 "seek_data": false, 00:24:21.587 "copy": true, 00:24:21.587 "nvme_iov_md": false 00:24:21.587 }, 00:24:21.587 "memory_domains": [ 00:24:21.587 { 00:24:21.587 "dma_device_id": "system", 00:24:21.587 "dma_device_type": 1 00:24:21.587 }, 00:24:21.587 { 00:24:21.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:21.587 "dma_device_type": 2 00:24:21.587 } 00:24:21.587 ], 00:24:21.587 "driver_specific": {} 00:24:21.587 }' 00:24:21.587 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:21.587 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:21.587 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:21.587 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:21.587 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:21.587 09:50:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:21.587 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:21.587 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:21.847 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:21.847 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:21.847 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:21.847 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:21.847 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:21.847 [2024-07-15 09:50:49.903414] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:21.847 [2024-07-15 09:50:49.903443] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:21.847 [2024-07-15 09:50:49.903457] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:21.847 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:24:21.847 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:24:21.847 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:21.847 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:24:21.847 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:24:21.847 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:24:21.847 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:21.847 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:24:21.847 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:21.847 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:21.847 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:21.847 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:21.847 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:21.847 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:21.847 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:21.847 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.847 09:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:22.105 09:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:22.105 "name": "Existed_Raid", 00:24:22.106 "uuid": "b44f5d4b-428f-11ef-a0af-c98d8ee52a94", 00:24:22.106 "strip_size_kb": 64, 
00:24:22.106 "state": "offline", 00:24:22.106 "raid_level": "raid0", 00:24:22.106 "superblock": true, 00:24:22.106 "num_base_bdevs": 4, 00:24:22.106 "num_base_bdevs_discovered": 3, 00:24:22.106 "num_base_bdevs_operational": 3, 00:24:22.106 "base_bdevs_list": [ 00:24:22.106 { 00:24:22.106 "name": null, 00:24:22.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:22.106 "is_configured": false, 00:24:22.106 "data_offset": 2048, 00:24:22.106 "data_size": 63488 00:24:22.106 }, 00:24:22.106 { 00:24:22.106 "name": "BaseBdev2", 00:24:22.106 "uuid": "b4b68507-428f-11ef-a0af-c98d8ee52a94", 00:24:22.106 "is_configured": true, 00:24:22.106 "data_offset": 2048, 00:24:22.106 "data_size": 63488 00:24:22.106 }, 00:24:22.106 { 00:24:22.106 "name": "BaseBdev3", 00:24:22.106 "uuid": "b570aff1-428f-11ef-a0af-c98d8ee52a94", 00:24:22.106 "is_configured": true, 00:24:22.106 "data_offset": 2048, 00:24:22.106 "data_size": 63488 00:24:22.106 }, 00:24:22.106 { 00:24:22.106 "name": "BaseBdev4", 00:24:22.106 "uuid": "b61afb0a-428f-11ef-a0af-c98d8ee52a94", 00:24:22.106 "is_configured": true, 00:24:22.106 "data_offset": 2048, 00:24:22.106 "data_size": 63488 00:24:22.106 } 00:24:22.106 ] 00:24:22.106 }' 00:24:22.106 09:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:22.106 09:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:22.386 09:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:24:22.386 09:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:22.386 09:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:22.386 09:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:22.644 09:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:22.644 09:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:22.644 09:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:22.902 [2024-07-15 09:50:50.819738] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:22.902 09:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:22.902 09:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:22.902 09:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:22.902 09:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.160 09:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:23.160 09:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:23.160 09:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:24:23.160 [2024-07-15 09:50:51.220168] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:23.160 09:50:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:23.160 09:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:23.160 09:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.160 09:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:23.418 09:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:23.418 09:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:23.418 09:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:24:23.677 [2024-07-15 09:50:51.628818] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:23.677 [2024-07-15 09:50:51.628847] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf0940c34a00 name Existed_Raid, state offline 00:24:23.677 09:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:23.677 09:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:23.677 09:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.677 09:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:24:23.937 09:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:24:23.937 09:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:24:23.937 09:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:24:23.937 09:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:24:23.937 09:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:23.937 09:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:23.937 BaseBdev2 00:24:24.197 09:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:24:24.198 09:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:24:24.198 09:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:24.198 09:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:24.198 09:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:24.198 09:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:24.198 09:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:24.198 09:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:24.458 [ 
00:24:24.458 { 00:24:24.458 "name": "BaseBdev2", 00:24:24.458 "aliases": [ 00:24:24.458 "b8d1deb0-428f-11ef-a0af-c98d8ee52a94" 00:24:24.458 ], 00:24:24.458 "product_name": "Malloc disk", 00:24:24.458 "block_size": 512, 00:24:24.458 "num_blocks": 65536, 00:24:24.458 "uuid": "b8d1deb0-428f-11ef-a0af-c98d8ee52a94", 00:24:24.458 "assigned_rate_limits": { 00:24:24.458 "rw_ios_per_sec": 0, 00:24:24.458 "rw_mbytes_per_sec": 0, 00:24:24.458 "r_mbytes_per_sec": 0, 00:24:24.458 "w_mbytes_per_sec": 0 00:24:24.458 }, 00:24:24.458 "claimed": false, 00:24:24.458 "zoned": false, 00:24:24.458 "supported_io_types": { 00:24:24.458 "read": true, 00:24:24.458 "write": true, 00:24:24.458 "unmap": true, 00:24:24.458 "flush": true, 00:24:24.458 "reset": true, 00:24:24.458 "nvme_admin": false, 00:24:24.458 "nvme_io": false, 00:24:24.458 "nvme_io_md": false, 00:24:24.458 "write_zeroes": true, 00:24:24.458 "zcopy": true, 00:24:24.458 "get_zone_info": false, 00:24:24.458 "zone_management": false, 00:24:24.458 "zone_append": false, 00:24:24.458 "compare": false, 00:24:24.458 "compare_and_write": false, 00:24:24.458 "abort": true, 00:24:24.458 "seek_hole": false, 00:24:24.458 "seek_data": false, 00:24:24.458 "copy": true, 00:24:24.458 "nvme_iov_md": false 00:24:24.458 }, 00:24:24.458 "memory_domains": [ 00:24:24.458 { 00:24:24.458 "dma_device_id": "system", 00:24:24.458 "dma_device_type": 1 00:24:24.458 }, 00:24:24.458 { 00:24:24.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:24.458 "dma_device_type": 2 00:24:24.458 } 00:24:24.458 ], 00:24:24.458 "driver_specific": {} 00:24:24.458 } 00:24:24.458 ] 00:24:24.458 09:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:24.458 09:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:24.458 09:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:24.458 09:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:24.718 BaseBdev3 00:24:24.718 09:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:24:24.718 09:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:24:24.718 09:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:24.718 09:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:24.718 09:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:24.718 09:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:24.718 09:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:24.977 09:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:24.977 [ 00:24:24.977 { 00:24:24.977 "name": "BaseBdev3", 00:24:24.977 "aliases": [ 00:24:24.977 "b932eb2c-428f-11ef-a0af-c98d8ee52a94" 00:24:24.977 ], 00:24:24.977 "product_name": "Malloc disk", 00:24:24.977 "block_size": 512, 00:24:24.977 "num_blocks": 65536, 00:24:24.977 "uuid": 
"b932eb2c-428f-11ef-a0af-c98d8ee52a94", 00:24:24.977 "assigned_rate_limits": { 00:24:24.977 "rw_ios_per_sec": 0, 00:24:24.977 "rw_mbytes_per_sec": 0, 00:24:24.977 "r_mbytes_per_sec": 0, 00:24:24.977 "w_mbytes_per_sec": 0 00:24:24.977 }, 00:24:24.977 "claimed": false, 00:24:24.977 "zoned": false, 00:24:24.977 "supported_io_types": { 00:24:24.977 "read": true, 00:24:24.977 "write": true, 00:24:24.977 "unmap": true, 00:24:24.977 "flush": true, 00:24:24.977 "reset": true, 00:24:24.977 "nvme_admin": false, 00:24:24.977 "nvme_io": false, 00:24:24.977 "nvme_io_md": false, 00:24:24.977 "write_zeroes": true, 00:24:24.977 "zcopy": true, 00:24:24.977 "get_zone_info": false, 00:24:24.977 "zone_management": false, 00:24:24.977 "zone_append": false, 00:24:24.977 "compare": false, 00:24:24.977 "compare_and_write": false, 00:24:24.977 "abort": true, 00:24:24.977 "seek_hole": false, 00:24:24.977 "seek_data": false, 00:24:24.977 "copy": true, 00:24:24.977 "nvme_iov_md": false 00:24:24.977 }, 00:24:24.977 "memory_domains": [ 00:24:24.977 { 00:24:24.977 "dma_device_id": "system", 00:24:24.977 "dma_device_type": 1 00:24:24.977 }, 00:24:24.977 { 00:24:24.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:24.977 "dma_device_type": 2 00:24:24.977 } 00:24:24.977 ], 00:24:24.977 "driver_specific": {} 00:24:24.977 } 00:24:24.977 ] 00:24:25.237 09:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:25.237 09:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:25.237 09:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:25.237 09:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:25.237 BaseBdev4 00:24:25.237 09:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:24:25.237 09:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:24:25.237 09:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:25.237 09:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:25.237 09:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:25.237 09:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:25.237 09:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:25.500 09:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:25.760 [ 00:24:25.760 { 00:24:25.760 "name": "BaseBdev4", 00:24:25.760 "aliases": [ 00:24:25.760 "b98c08ce-428f-11ef-a0af-c98d8ee52a94" 00:24:25.760 ], 00:24:25.760 "product_name": "Malloc disk", 00:24:25.760 "block_size": 512, 00:24:25.760 "num_blocks": 65536, 00:24:25.760 "uuid": "b98c08ce-428f-11ef-a0af-c98d8ee52a94", 00:24:25.760 "assigned_rate_limits": { 00:24:25.760 "rw_ios_per_sec": 0, 00:24:25.760 "rw_mbytes_per_sec": 0, 00:24:25.760 "r_mbytes_per_sec": 0, 00:24:25.760 "w_mbytes_per_sec": 0 00:24:25.760 }, 00:24:25.760 "claimed": false, 00:24:25.760 "zoned": false, 00:24:25.760 
"supported_io_types": { 00:24:25.760 "read": true, 00:24:25.760 "write": true, 00:24:25.760 "unmap": true, 00:24:25.760 "flush": true, 00:24:25.760 "reset": true, 00:24:25.760 "nvme_admin": false, 00:24:25.760 "nvme_io": false, 00:24:25.760 "nvme_io_md": false, 00:24:25.760 "write_zeroes": true, 00:24:25.760 "zcopy": true, 00:24:25.760 "get_zone_info": false, 00:24:25.760 "zone_management": false, 00:24:25.760 "zone_append": false, 00:24:25.760 "compare": false, 00:24:25.760 "compare_and_write": false, 00:24:25.760 "abort": true, 00:24:25.760 "seek_hole": false, 00:24:25.760 "seek_data": false, 00:24:25.760 "copy": true, 00:24:25.760 "nvme_iov_md": false 00:24:25.760 }, 00:24:25.760 "memory_domains": [ 00:24:25.760 { 00:24:25.760 "dma_device_id": "system", 00:24:25.760 "dma_device_type": 1 00:24:25.760 }, 00:24:25.760 { 00:24:25.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:25.760 "dma_device_type": 2 00:24:25.760 } 00:24:25.760 ], 00:24:25.760 "driver_specific": {} 00:24:25.760 } 00:24:25.760 ] 00:24:25.760 09:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:25.760 09:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:25.760 09:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:25.760 09:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:26.019 [2024-07-15 09:50:53.945391] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:26.019 [2024-07-15 09:50:53.945449] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:26.019 [2024-07-15 09:50:53.945456] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:26.019 [2024-07-15 09:50:53.946056] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:26.019 [2024-07-15 09:50:53.946079] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:26.019 09:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:26.019 09:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:26.019 09:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:26.019 09:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:26.019 09:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:26.019 09:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:26.019 09:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:26.019 09:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:26.019 09:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:26.019 09:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:26.019 09:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:26.019 09:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:26.278 09:50:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:26.278 "name": "Existed_Raid", 00:24:26.278 "uuid": "b9f63da2-428f-11ef-a0af-c98d8ee52a94", 00:24:26.278 "strip_size_kb": 64, 00:24:26.278 "state": "configuring", 00:24:26.278 "raid_level": "raid0", 00:24:26.278 "superblock": true, 00:24:26.278 "num_base_bdevs": 4, 00:24:26.278 "num_base_bdevs_discovered": 3, 00:24:26.278 "num_base_bdevs_operational": 4, 00:24:26.278 "base_bdevs_list": [ 00:24:26.278 { 00:24:26.278 "name": "BaseBdev1", 00:24:26.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.278 "is_configured": false, 00:24:26.278 "data_offset": 0, 00:24:26.278 "data_size": 0 00:24:26.278 }, 00:24:26.278 { 00:24:26.278 "name": "BaseBdev2", 00:24:26.279 "uuid": "b8d1deb0-428f-11ef-a0af-c98d8ee52a94", 00:24:26.279 "is_configured": true, 00:24:26.279 "data_offset": 2048, 00:24:26.279 "data_size": 63488 00:24:26.279 }, 00:24:26.279 { 00:24:26.279 "name": "BaseBdev3", 00:24:26.279 "uuid": "b932eb2c-428f-11ef-a0af-c98d8ee52a94", 00:24:26.279 "is_configured": true, 00:24:26.279 "data_offset": 2048, 00:24:26.279 "data_size": 63488 00:24:26.279 }, 00:24:26.279 { 00:24:26.279 "name": "BaseBdev4", 00:24:26.279 "uuid": "b98c08ce-428f-11ef-a0af-c98d8ee52a94", 00:24:26.279 "is_configured": true, 00:24:26.279 "data_offset": 2048, 00:24:26.279 "data_size": 63488 00:24:26.279 } 00:24:26.279 ] 00:24:26.279 }' 00:24:26.279 09:50:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:26.279 09:50:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:26.539 09:50:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:24:26.798 [2024-07-15 09:50:54.709416] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:26.798 09:50:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:26.798 09:50:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:26.798 09:50:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:26.798 09:50:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:26.798 09:50:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:26.798 09:50:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:26.798 09:50:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:26.798 09:50:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:26.798 09:50:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:26.798 09:50:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:26.798 09:50:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:26.798 09:50:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:27.058 09:50:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:27.058 "name": "Existed_Raid", 00:24:27.058 "uuid": "b9f63da2-428f-11ef-a0af-c98d8ee52a94", 00:24:27.058 "strip_size_kb": 64, 00:24:27.058 "state": "configuring", 00:24:27.058 "raid_level": "raid0", 00:24:27.058 "superblock": true, 00:24:27.058 "num_base_bdevs": 4, 00:24:27.058 "num_base_bdevs_discovered": 2, 00:24:27.058 "num_base_bdevs_operational": 4, 00:24:27.058 "base_bdevs_list": [ 00:24:27.058 { 00:24:27.058 "name": "BaseBdev1", 00:24:27.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:27.058 "is_configured": false, 00:24:27.058 "data_offset": 0, 00:24:27.058 "data_size": 0 00:24:27.058 }, 00:24:27.058 { 00:24:27.058 "name": null, 00:24:27.058 "uuid": "b8d1deb0-428f-11ef-a0af-c98d8ee52a94", 00:24:27.058 "is_configured": false, 00:24:27.058 "data_offset": 2048, 00:24:27.058 "data_size": 63488 00:24:27.058 }, 00:24:27.058 { 00:24:27.058 "name": "BaseBdev3", 00:24:27.058 "uuid": "b932eb2c-428f-11ef-a0af-c98d8ee52a94", 00:24:27.058 "is_configured": true, 00:24:27.058 "data_offset": 2048, 00:24:27.058 "data_size": 63488 00:24:27.058 }, 00:24:27.058 { 00:24:27.058 "name": "BaseBdev4", 00:24:27.058 "uuid": "b98c08ce-428f-11ef-a0af-c98d8ee52a94", 00:24:27.058 "is_configured": true, 00:24:27.058 "data_offset": 2048, 00:24:27.058 "data_size": 63488 00:24:27.058 } 00:24:27.058 ] 00:24:27.058 }' 00:24:27.058 09:50:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:27.058 09:50:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.316 09:50:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:27.316 09:50:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:27.316 09:50:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:24:27.316 09:50:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:27.577 [2024-07-15 09:50:55.573617] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:27.577 BaseBdev1 00:24:27.577 09:50:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:24:27.577 09:50:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:24:27.577 09:50:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:27.577 09:50:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:27.577 09:50:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:27.577 09:50:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:27.577 09:50:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:27.836 09:50:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:28.094 [ 00:24:28.094 { 00:24:28.094 "name": "BaseBdev1", 00:24:28.094 "aliases": [ 00:24:28.094 "baeeabae-428f-11ef-a0af-c98d8ee52a94" 00:24:28.094 ], 00:24:28.094 "product_name": "Malloc disk", 00:24:28.094 "block_size": 512, 00:24:28.094 "num_blocks": 65536, 00:24:28.094 "uuid": "baeeabae-428f-11ef-a0af-c98d8ee52a94", 00:24:28.094 "assigned_rate_limits": { 00:24:28.094 "rw_ios_per_sec": 0, 00:24:28.094 "rw_mbytes_per_sec": 0, 00:24:28.094 "r_mbytes_per_sec": 0, 00:24:28.094 "w_mbytes_per_sec": 0 00:24:28.094 }, 00:24:28.094 "claimed": true, 00:24:28.094 "claim_type": "exclusive_write", 00:24:28.094 "zoned": false, 00:24:28.094 "supported_io_types": { 00:24:28.094 "read": true, 00:24:28.094 "write": true, 00:24:28.094 "unmap": true, 00:24:28.094 "flush": true, 00:24:28.094 "reset": true, 00:24:28.094 "nvme_admin": false, 00:24:28.094 "nvme_io": false, 00:24:28.094 "nvme_io_md": false, 00:24:28.094 "write_zeroes": true, 00:24:28.094 "zcopy": true, 00:24:28.094 "get_zone_info": false, 00:24:28.094 "zone_management": false, 00:24:28.094 "zone_append": false, 00:24:28.094 "compare": false, 00:24:28.094 "compare_and_write": false, 00:24:28.094 "abort": true, 00:24:28.094 "seek_hole": false, 00:24:28.094 "seek_data": false, 00:24:28.094 "copy": true, 00:24:28.094 "nvme_iov_md": false 00:24:28.094 }, 00:24:28.094 "memory_domains": [ 00:24:28.094 { 00:24:28.094 "dma_device_id": "system", 00:24:28.094 "dma_device_type": 1 00:24:28.094 }, 00:24:28.094 { 00:24:28.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:28.094 "dma_device_type": 2 00:24:28.094 } 00:24:28.094 ], 00:24:28.094 "driver_specific": {} 00:24:28.094 } 00:24:28.094 ] 00:24:28.094 09:50:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:28.094 09:50:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:28.094 09:50:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:28.094 09:50:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:28.094 09:50:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:28.094 09:50:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:28.094 09:50:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:28.094 09:50:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:28.094 09:50:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:28.094 09:50:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:28.094 09:50:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:28.094 09:50:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:28.094 09:50:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:28.353 09:50:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:28.353 "name": "Existed_Raid", 00:24:28.353 "uuid": 
"b9f63da2-428f-11ef-a0af-c98d8ee52a94", 00:24:28.353 "strip_size_kb": 64, 00:24:28.353 "state": "configuring", 00:24:28.353 "raid_level": "raid0", 00:24:28.353 "superblock": true, 00:24:28.353 "num_base_bdevs": 4, 00:24:28.353 "num_base_bdevs_discovered": 3, 00:24:28.353 "num_base_bdevs_operational": 4, 00:24:28.353 "base_bdevs_list": [ 00:24:28.353 { 00:24:28.353 "name": "BaseBdev1", 00:24:28.353 "uuid": "baeeabae-428f-11ef-a0af-c98d8ee52a94", 00:24:28.353 "is_configured": true, 00:24:28.353 "data_offset": 2048, 00:24:28.353 "data_size": 63488 00:24:28.353 }, 00:24:28.353 { 00:24:28.353 "name": null, 00:24:28.353 "uuid": "b8d1deb0-428f-11ef-a0af-c98d8ee52a94", 00:24:28.353 "is_configured": false, 00:24:28.353 "data_offset": 2048, 00:24:28.353 "data_size": 63488 00:24:28.353 }, 00:24:28.353 { 00:24:28.353 "name": "BaseBdev3", 00:24:28.353 "uuid": "b932eb2c-428f-11ef-a0af-c98d8ee52a94", 00:24:28.353 "is_configured": true, 00:24:28.353 "data_offset": 2048, 00:24:28.353 "data_size": 63488 00:24:28.353 }, 00:24:28.353 { 00:24:28.353 "name": "BaseBdev4", 00:24:28.353 "uuid": "b98c08ce-428f-11ef-a0af-c98d8ee52a94", 00:24:28.353 "is_configured": true, 00:24:28.353 "data_offset": 2048, 00:24:28.353 "data_size": 63488 00:24:28.353 } 00:24:28.353 ] 00:24:28.353 }' 00:24:28.354 09:50:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:28.354 09:50:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.612 09:50:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:28.612 09:50:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:28.612 09:50:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:24:28.612 09:50:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:24:28.871 [2024-07-15 09:50:56.869550] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:28.871 09:50:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:28.871 09:50:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:28.871 09:50:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:28.871 09:50:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:28.871 09:50:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:28.871 09:50:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:28.871 09:50:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:28.871 09:50:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:28.871 09:50:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:28.871 09:50:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:28.871 09:50:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:28.871 09:50:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:29.130 09:50:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:29.130 "name": "Existed_Raid", 00:24:29.130 "uuid": "b9f63da2-428f-11ef-a0af-c98d8ee52a94", 00:24:29.130 "strip_size_kb": 64, 00:24:29.130 "state": "configuring", 00:24:29.130 "raid_level": "raid0", 00:24:29.130 "superblock": true, 00:24:29.130 "num_base_bdevs": 4, 00:24:29.130 "num_base_bdevs_discovered": 2, 00:24:29.130 "num_base_bdevs_operational": 4, 00:24:29.130 "base_bdevs_list": [ 00:24:29.130 { 00:24:29.130 "name": "BaseBdev1", 00:24:29.130 "uuid": "baeeabae-428f-11ef-a0af-c98d8ee52a94", 00:24:29.130 "is_configured": true, 00:24:29.130 "data_offset": 2048, 00:24:29.130 "data_size": 63488 00:24:29.130 }, 00:24:29.130 { 00:24:29.130 "name": null, 00:24:29.130 "uuid": "b8d1deb0-428f-11ef-a0af-c98d8ee52a94", 00:24:29.130 "is_configured": false, 00:24:29.130 "data_offset": 2048, 00:24:29.130 "data_size": 63488 00:24:29.130 }, 00:24:29.130 { 00:24:29.130 "name": null, 00:24:29.130 "uuid": "b932eb2c-428f-11ef-a0af-c98d8ee52a94", 00:24:29.130 "is_configured": false, 00:24:29.130 "data_offset": 2048, 00:24:29.130 "data_size": 63488 00:24:29.130 }, 00:24:29.130 { 00:24:29.130 "name": "BaseBdev4", 00:24:29.130 "uuid": "b98c08ce-428f-11ef-a0af-c98d8ee52a94", 00:24:29.130 "is_configured": true, 00:24:29.130 "data_offset": 2048, 00:24:29.130 "data_size": 63488 00:24:29.130 } 00:24:29.130 ] 00:24:29.130 }' 00:24:29.130 09:50:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:29.130 09:50:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:29.389 09:50:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:29.389 09:50:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:29.648 09:50:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:24:29.648 09:50:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:24:29.648 [2024-07-15 09:50:57.741658] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:29.961 09:50:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:29.961 09:50:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:29.961 09:50:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:29.961 09:50:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:29.961 09:50:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:29.961 09:50:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:29.961 09:50:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:29.961 09:50:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:24:29.961 09:50:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:29.961 09:50:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:29.961 09:50:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:29.961 09:50:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:29.961 09:50:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:29.961 "name": "Existed_Raid", 00:24:29.961 "uuid": "b9f63da2-428f-11ef-a0af-c98d8ee52a94", 00:24:29.961 "strip_size_kb": 64, 00:24:29.961 "state": "configuring", 00:24:29.961 "raid_level": "raid0", 00:24:29.961 "superblock": true, 00:24:29.961 "num_base_bdevs": 4, 00:24:29.961 "num_base_bdevs_discovered": 3, 00:24:29.961 "num_base_bdevs_operational": 4, 00:24:29.961 "base_bdevs_list": [ 00:24:29.961 { 00:24:29.961 "name": "BaseBdev1", 00:24:29.961 "uuid": "baeeabae-428f-11ef-a0af-c98d8ee52a94", 00:24:29.961 "is_configured": true, 00:24:29.961 "data_offset": 2048, 00:24:29.961 "data_size": 63488 00:24:29.961 }, 00:24:29.961 { 00:24:29.961 "name": null, 00:24:29.961 "uuid": "b8d1deb0-428f-11ef-a0af-c98d8ee52a94", 00:24:29.961 "is_configured": false, 00:24:29.961 "data_offset": 2048, 00:24:29.961 "data_size": 63488 00:24:29.961 }, 00:24:29.961 { 00:24:29.961 "name": "BaseBdev3", 00:24:29.961 "uuid": "b932eb2c-428f-11ef-a0af-c98d8ee52a94", 00:24:29.961 "is_configured": true, 00:24:29.961 "data_offset": 2048, 00:24:29.961 "data_size": 63488 00:24:29.961 }, 00:24:29.961 { 00:24:29.961 "name": "BaseBdev4", 00:24:29.961 "uuid": "b98c08ce-428f-11ef-a0af-c98d8ee52a94", 00:24:29.961 "is_configured": true, 00:24:29.961 "data_offset": 2048, 00:24:29.961 "data_size": 63488 00:24:29.961 } 00:24:29.961 ] 00:24:29.961 }' 00:24:29.961 09:50:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:29.961 09:50:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:30.219 09:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:30.219 09:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:30.477 09:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:24:30.477 09:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:30.735 [2024-07-15 09:50:58.637802] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:30.735 09:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:30.735 09:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:30.735 09:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:30.735 09:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:30.735 09:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
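The verify_raid_bdev_state calls traced here reduce to a few steps: fetch all raid bdevs over RPC, select the array under test by name, and compare the expected fields. A condensed sketch of that check, assuming the same RPC socket and the Existed_Raid volume in the state shown in the dump that follows (two of four base bdevs discovered after BaseBdev1 was deleted):

tmp=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all)
info=$(jq -r '.[] | select(.name == "Existed_Raid")' <<< "$tmp")
# Expected values mirror the RPC dump below: the array stays in "configuring"
# while only two of its four base bdevs are present.
[[ $(jq -r .state <<< "$info") == configuring ]]
[[ $(jq -r .num_base_bdevs_discovered <<< "$info") == 2 ]]
[[ $(jq -r .num_base_bdevs_operational <<< "$info") == 4 ]]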
00:24:30.735 09:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:30.735 09:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:30.735 09:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:30.735 09:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:30.735 09:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:30.735 09:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:30.735 09:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:30.993 09:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:30.993 "name": "Existed_Raid", 00:24:30.993 "uuid": "b9f63da2-428f-11ef-a0af-c98d8ee52a94", 00:24:30.993 "strip_size_kb": 64, 00:24:30.993 "state": "configuring", 00:24:30.993 "raid_level": "raid0", 00:24:30.993 "superblock": true, 00:24:30.993 "num_base_bdevs": 4, 00:24:30.993 "num_base_bdevs_discovered": 2, 00:24:30.993 "num_base_bdevs_operational": 4, 00:24:30.993 "base_bdevs_list": [ 00:24:30.993 { 00:24:30.993 "name": null, 00:24:30.993 "uuid": "baeeabae-428f-11ef-a0af-c98d8ee52a94", 00:24:30.993 "is_configured": false, 00:24:30.993 "data_offset": 2048, 00:24:30.993 "data_size": 63488 00:24:30.993 }, 00:24:30.993 { 00:24:30.993 "name": null, 00:24:30.993 "uuid": "b8d1deb0-428f-11ef-a0af-c98d8ee52a94", 00:24:30.993 "is_configured": false, 00:24:30.993 "data_offset": 2048, 00:24:30.993 "data_size": 63488 00:24:30.993 }, 00:24:30.993 { 00:24:30.993 "name": "BaseBdev3", 00:24:30.993 "uuid": "b932eb2c-428f-11ef-a0af-c98d8ee52a94", 00:24:30.993 "is_configured": true, 00:24:30.993 "data_offset": 2048, 00:24:30.993 "data_size": 63488 00:24:30.993 }, 00:24:30.993 { 00:24:30.993 "name": "BaseBdev4", 00:24:30.993 "uuid": "b98c08ce-428f-11ef-a0af-c98d8ee52a94", 00:24:30.993 "is_configured": true, 00:24:30.993 "data_offset": 2048, 00:24:30.993 "data_size": 63488 00:24:30.993 } 00:24:30.993 ] 00:24:30.993 }' 00:24:30.993 09:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:30.993 09:50:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:31.251 09:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:31.251 09:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:31.251 09:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:24:31.251 09:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:24:31.568 [2024-07-15 09:50:59.510852] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:31.568 09:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:31.568 09:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:24:31.568 09:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:31.568 09:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:31.568 09:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:31.568 09:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:31.568 09:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:31.568 09:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:31.568 09:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:31.568 09:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:31.568 09:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:31.568 09:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:31.827 09:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:31.827 "name": "Existed_Raid", 00:24:31.827 "uuid": "b9f63da2-428f-11ef-a0af-c98d8ee52a94", 00:24:31.827 "strip_size_kb": 64, 00:24:31.827 "state": "configuring", 00:24:31.827 "raid_level": "raid0", 00:24:31.827 "superblock": true, 00:24:31.827 "num_base_bdevs": 4, 00:24:31.827 "num_base_bdevs_discovered": 3, 00:24:31.827 "num_base_bdevs_operational": 4, 00:24:31.827 "base_bdevs_list": [ 00:24:31.827 { 00:24:31.827 "name": null, 00:24:31.827 "uuid": "baeeabae-428f-11ef-a0af-c98d8ee52a94", 00:24:31.827 "is_configured": false, 00:24:31.827 "data_offset": 2048, 00:24:31.827 "data_size": 63488 00:24:31.827 }, 00:24:31.827 { 00:24:31.827 "name": "BaseBdev2", 00:24:31.827 "uuid": "b8d1deb0-428f-11ef-a0af-c98d8ee52a94", 00:24:31.827 "is_configured": true, 00:24:31.827 "data_offset": 2048, 00:24:31.827 "data_size": 63488 00:24:31.827 }, 00:24:31.827 { 00:24:31.827 "name": "BaseBdev3", 00:24:31.827 "uuid": "b932eb2c-428f-11ef-a0af-c98d8ee52a94", 00:24:31.827 "is_configured": true, 00:24:31.827 "data_offset": 2048, 00:24:31.827 "data_size": 63488 00:24:31.827 }, 00:24:31.827 { 00:24:31.827 "name": "BaseBdev4", 00:24:31.827 "uuid": "b98c08ce-428f-11ef-a0af-c98d8ee52a94", 00:24:31.827 "is_configured": true, 00:24:31.827 "data_offset": 2048, 00:24:31.827 "data_size": 63488 00:24:31.827 } 00:24:31.827 ] 00:24:31.827 }' 00:24:31.827 09:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:31.827 09:50:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:32.104 09:51:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:32.104 09:51:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:32.361 09:51:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:24:32.361 09:51:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:24:32.361 09:51:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:32.361 09:51:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u baeeabae-428f-11ef-a0af-c98d8ee52a94 00:24:32.617 [2024-07-15 09:51:00.615080] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:24:32.617 [2024-07-15 09:51:00.615136] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xf0940c34f00 00:24:32.617 [2024-07-15 09:51:00.615141] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:32.617 [2024-07-15 09:51:00.615161] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf0940c97e20 00:24:32.617 [2024-07-15 09:51:00.615199] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xf0940c34f00 00:24:32.617 [2024-07-15 09:51:00.615203] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xf0940c34f00 00:24:32.617 [2024-07-15 09:51:00.615219] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:32.617 NewBaseBdev 00:24:32.617 09:51:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:24:32.617 09:51:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:24:32.617 09:51:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:32.617 09:51:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:32.617 09:51:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:32.617 09:51:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:32.617 09:51:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:32.874 09:51:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:24:33.131 [ 00:24:33.131 { 00:24:33.131 "name": "NewBaseBdev", 00:24:33.131 "aliases": [ 00:24:33.131 "baeeabae-428f-11ef-a0af-c98d8ee52a94" 00:24:33.131 ], 00:24:33.131 "product_name": "Malloc disk", 00:24:33.131 "block_size": 512, 00:24:33.131 "num_blocks": 65536, 00:24:33.131 "uuid": "baeeabae-428f-11ef-a0af-c98d8ee52a94", 00:24:33.131 "assigned_rate_limits": { 00:24:33.131 "rw_ios_per_sec": 0, 00:24:33.131 "rw_mbytes_per_sec": 0, 00:24:33.131 "r_mbytes_per_sec": 0, 00:24:33.131 "w_mbytes_per_sec": 0 00:24:33.131 }, 00:24:33.131 "claimed": true, 00:24:33.131 "claim_type": "exclusive_write", 00:24:33.131 "zoned": false, 00:24:33.131 "supported_io_types": { 00:24:33.131 "read": true, 00:24:33.131 "write": true, 00:24:33.131 "unmap": true, 00:24:33.131 "flush": true, 00:24:33.131 "reset": true, 00:24:33.131 "nvme_admin": false, 00:24:33.131 "nvme_io": false, 00:24:33.131 "nvme_io_md": false, 00:24:33.131 "write_zeroes": true, 00:24:33.131 "zcopy": true, 00:24:33.131 "get_zone_info": false, 00:24:33.131 "zone_management": false, 00:24:33.131 "zone_append": false, 00:24:33.131 "compare": false, 00:24:33.131 "compare_and_write": false, 00:24:33.131 "abort": 
true, 00:24:33.131 "seek_hole": false, 00:24:33.131 "seek_data": false, 00:24:33.131 "copy": true, 00:24:33.131 "nvme_iov_md": false 00:24:33.131 }, 00:24:33.131 "memory_domains": [ 00:24:33.131 { 00:24:33.131 "dma_device_id": "system", 00:24:33.131 "dma_device_type": 1 00:24:33.131 }, 00:24:33.131 { 00:24:33.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:33.131 "dma_device_type": 2 00:24:33.131 } 00:24:33.131 ], 00:24:33.131 "driver_specific": {} 00:24:33.131 } 00:24:33.131 ] 00:24:33.132 09:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:33.132 09:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:24:33.132 09:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:33.132 09:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:33.132 09:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:33.132 09:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:33.132 09:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:33.132 09:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:33.132 09:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:33.132 09:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:33.132 09:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:33.132 09:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:33.132 09:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:33.132 09:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:33.132 "name": "Existed_Raid", 00:24:33.132 "uuid": "b9f63da2-428f-11ef-a0af-c98d8ee52a94", 00:24:33.132 "strip_size_kb": 64, 00:24:33.132 "state": "online", 00:24:33.132 "raid_level": "raid0", 00:24:33.132 "superblock": true, 00:24:33.132 "num_base_bdevs": 4, 00:24:33.132 "num_base_bdevs_discovered": 4, 00:24:33.132 "num_base_bdevs_operational": 4, 00:24:33.132 "base_bdevs_list": [ 00:24:33.132 { 00:24:33.132 "name": "NewBaseBdev", 00:24:33.132 "uuid": "baeeabae-428f-11ef-a0af-c98d8ee52a94", 00:24:33.132 "is_configured": true, 00:24:33.132 "data_offset": 2048, 00:24:33.132 "data_size": 63488 00:24:33.132 }, 00:24:33.132 { 00:24:33.132 "name": "BaseBdev2", 00:24:33.132 "uuid": "b8d1deb0-428f-11ef-a0af-c98d8ee52a94", 00:24:33.132 "is_configured": true, 00:24:33.132 "data_offset": 2048, 00:24:33.132 "data_size": 63488 00:24:33.132 }, 00:24:33.132 { 00:24:33.132 "name": "BaseBdev3", 00:24:33.132 "uuid": "b932eb2c-428f-11ef-a0af-c98d8ee52a94", 00:24:33.132 "is_configured": true, 00:24:33.132 "data_offset": 2048, 00:24:33.132 "data_size": 63488 00:24:33.132 }, 00:24:33.132 { 00:24:33.132 "name": "BaseBdev4", 00:24:33.132 "uuid": "b98c08ce-428f-11ef-a0af-c98d8ee52a94", 00:24:33.132 "is_configured": true, 00:24:33.132 "data_offset": 2048, 00:24:33.132 "data_size": 63488 00:24:33.132 } 00:24:33.132 ] 00:24:33.132 }' 00:24:33.132 
09:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:33.132 09:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:33.390 09:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:24:33.390 09:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:24:33.390 09:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:33.390 09:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:33.390 09:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:33.390 09:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:24:33.648 09:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:33.648 09:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:33.648 [2024-07-15 09:51:01.743090] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:33.990 09:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:33.990 "name": "Existed_Raid", 00:24:33.990 "aliases": [ 00:24:33.990 "b9f63da2-428f-11ef-a0af-c98d8ee52a94" 00:24:33.990 ], 00:24:33.990 "product_name": "Raid Volume", 00:24:33.990 "block_size": 512, 00:24:33.990 "num_blocks": 253952, 00:24:33.990 "uuid": "b9f63da2-428f-11ef-a0af-c98d8ee52a94", 00:24:33.990 "assigned_rate_limits": { 00:24:33.990 "rw_ios_per_sec": 0, 00:24:33.990 "rw_mbytes_per_sec": 0, 00:24:33.990 "r_mbytes_per_sec": 0, 00:24:33.990 "w_mbytes_per_sec": 0 00:24:33.990 }, 00:24:33.990 "claimed": false, 00:24:33.990 "zoned": false, 00:24:33.990 "supported_io_types": { 00:24:33.990 "read": true, 00:24:33.990 "write": true, 00:24:33.990 "unmap": true, 00:24:33.990 "flush": true, 00:24:33.990 "reset": true, 00:24:33.990 "nvme_admin": false, 00:24:33.990 "nvme_io": false, 00:24:33.990 "nvme_io_md": false, 00:24:33.990 "write_zeroes": true, 00:24:33.990 "zcopy": false, 00:24:33.990 "get_zone_info": false, 00:24:33.990 "zone_management": false, 00:24:33.990 "zone_append": false, 00:24:33.990 "compare": false, 00:24:33.990 "compare_and_write": false, 00:24:33.990 "abort": false, 00:24:33.990 "seek_hole": false, 00:24:33.990 "seek_data": false, 00:24:33.990 "copy": false, 00:24:33.990 "nvme_iov_md": false 00:24:33.990 }, 00:24:33.990 "memory_domains": [ 00:24:33.990 { 00:24:33.990 "dma_device_id": "system", 00:24:33.990 "dma_device_type": 1 00:24:33.990 }, 00:24:33.990 { 00:24:33.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:33.990 "dma_device_type": 2 00:24:33.990 }, 00:24:33.990 { 00:24:33.990 "dma_device_id": "system", 00:24:33.990 "dma_device_type": 1 00:24:33.990 }, 00:24:33.990 { 00:24:33.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:33.990 "dma_device_type": 2 00:24:33.990 }, 00:24:33.990 { 00:24:33.990 "dma_device_id": "system", 00:24:33.990 "dma_device_type": 1 00:24:33.990 }, 00:24:33.990 { 00:24:33.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:33.990 "dma_device_type": 2 00:24:33.990 }, 00:24:33.990 { 00:24:33.990 "dma_device_id": "system", 00:24:33.990 "dma_device_type": 1 00:24:33.990 }, 00:24:33.990 { 00:24:33.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:24:33.990 "dma_device_type": 2 00:24:33.990 } 00:24:33.990 ], 00:24:33.990 "driver_specific": { 00:24:33.990 "raid": { 00:24:33.990 "uuid": "b9f63da2-428f-11ef-a0af-c98d8ee52a94", 00:24:33.990 "strip_size_kb": 64, 00:24:33.990 "state": "online", 00:24:33.990 "raid_level": "raid0", 00:24:33.990 "superblock": true, 00:24:33.990 "num_base_bdevs": 4, 00:24:33.990 "num_base_bdevs_discovered": 4, 00:24:33.990 "num_base_bdevs_operational": 4, 00:24:33.990 "base_bdevs_list": [ 00:24:33.990 { 00:24:33.990 "name": "NewBaseBdev", 00:24:33.990 "uuid": "baeeabae-428f-11ef-a0af-c98d8ee52a94", 00:24:33.990 "is_configured": true, 00:24:33.990 "data_offset": 2048, 00:24:33.990 "data_size": 63488 00:24:33.990 }, 00:24:33.990 { 00:24:33.990 "name": "BaseBdev2", 00:24:33.990 "uuid": "b8d1deb0-428f-11ef-a0af-c98d8ee52a94", 00:24:33.990 "is_configured": true, 00:24:33.990 "data_offset": 2048, 00:24:33.990 "data_size": 63488 00:24:33.990 }, 00:24:33.990 { 00:24:33.990 "name": "BaseBdev3", 00:24:33.990 "uuid": "b932eb2c-428f-11ef-a0af-c98d8ee52a94", 00:24:33.990 "is_configured": true, 00:24:33.990 "data_offset": 2048, 00:24:33.990 "data_size": 63488 00:24:33.990 }, 00:24:33.990 { 00:24:33.990 "name": "BaseBdev4", 00:24:33.990 "uuid": "b98c08ce-428f-11ef-a0af-c98d8ee52a94", 00:24:33.990 "is_configured": true, 00:24:33.990 "data_offset": 2048, 00:24:33.990 "data_size": 63488 00:24:33.990 } 00:24:33.990 ] 00:24:33.990 } 00:24:33.990 } 00:24:33.990 }' 00:24:33.990 09:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:33.990 09:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:24:33.990 BaseBdev2 00:24:33.990 BaseBdev3 00:24:33.990 BaseBdev4' 00:24:33.990 09:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:33.990 09:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:24:33.990 09:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:33.990 09:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:33.990 "name": "NewBaseBdev", 00:24:33.990 "aliases": [ 00:24:33.990 "baeeabae-428f-11ef-a0af-c98d8ee52a94" 00:24:33.990 ], 00:24:33.990 "product_name": "Malloc disk", 00:24:33.990 "block_size": 512, 00:24:33.990 "num_blocks": 65536, 00:24:33.990 "uuid": "baeeabae-428f-11ef-a0af-c98d8ee52a94", 00:24:33.990 "assigned_rate_limits": { 00:24:33.990 "rw_ios_per_sec": 0, 00:24:33.990 "rw_mbytes_per_sec": 0, 00:24:33.990 "r_mbytes_per_sec": 0, 00:24:33.990 "w_mbytes_per_sec": 0 00:24:33.990 }, 00:24:33.990 "claimed": true, 00:24:33.990 "claim_type": "exclusive_write", 00:24:33.990 "zoned": false, 00:24:33.990 "supported_io_types": { 00:24:33.990 "read": true, 00:24:33.990 "write": true, 00:24:33.990 "unmap": true, 00:24:33.990 "flush": true, 00:24:33.990 "reset": true, 00:24:33.990 "nvme_admin": false, 00:24:33.990 "nvme_io": false, 00:24:33.990 "nvme_io_md": false, 00:24:33.990 "write_zeroes": true, 00:24:33.990 "zcopy": true, 00:24:33.990 "get_zone_info": false, 00:24:33.990 "zone_management": false, 00:24:33.990 "zone_append": false, 00:24:33.990 "compare": false, 00:24:33.990 "compare_and_write": false, 00:24:33.990 "abort": true, 00:24:33.990 "seek_hole": false, 00:24:33.990 "seek_data": false, 
00:24:33.990 "copy": true, 00:24:33.990 "nvme_iov_md": false 00:24:33.990 }, 00:24:33.990 "memory_domains": [ 00:24:33.990 { 00:24:33.990 "dma_device_id": "system", 00:24:33.990 "dma_device_type": 1 00:24:33.990 }, 00:24:33.990 { 00:24:33.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:33.990 "dma_device_type": 2 00:24:33.990 } 00:24:33.990 ], 00:24:33.990 "driver_specific": {} 00:24:33.990 }' 00:24:33.990 09:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:33.990 09:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:33.990 09:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:33.990 09:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:33.990 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:33.990 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:33.990 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:33.990 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:33.990 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:33.990 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:33.990 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:33.990 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:33.990 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:33.990 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:33.991 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:34.261 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:34.261 "name": "BaseBdev2", 00:24:34.261 "aliases": [ 00:24:34.261 "b8d1deb0-428f-11ef-a0af-c98d8ee52a94" 00:24:34.261 ], 00:24:34.261 "product_name": "Malloc disk", 00:24:34.261 "block_size": 512, 00:24:34.261 "num_blocks": 65536, 00:24:34.261 "uuid": "b8d1deb0-428f-11ef-a0af-c98d8ee52a94", 00:24:34.261 "assigned_rate_limits": { 00:24:34.261 "rw_ios_per_sec": 0, 00:24:34.261 "rw_mbytes_per_sec": 0, 00:24:34.261 "r_mbytes_per_sec": 0, 00:24:34.261 "w_mbytes_per_sec": 0 00:24:34.261 }, 00:24:34.261 "claimed": true, 00:24:34.261 "claim_type": "exclusive_write", 00:24:34.261 "zoned": false, 00:24:34.261 "supported_io_types": { 00:24:34.261 "read": true, 00:24:34.261 "write": true, 00:24:34.261 "unmap": true, 00:24:34.261 "flush": true, 00:24:34.261 "reset": true, 00:24:34.261 "nvme_admin": false, 00:24:34.261 "nvme_io": false, 00:24:34.261 "nvme_io_md": false, 00:24:34.261 "write_zeroes": true, 00:24:34.261 "zcopy": true, 00:24:34.261 "get_zone_info": false, 00:24:34.261 "zone_management": false, 00:24:34.261 "zone_append": false, 00:24:34.261 "compare": false, 00:24:34.261 "compare_and_write": false, 00:24:34.261 "abort": true, 00:24:34.261 "seek_hole": false, 00:24:34.261 "seek_data": false, 00:24:34.261 "copy": true, 00:24:34.261 "nvme_iov_md": false 00:24:34.261 }, 00:24:34.261 "memory_domains": [ 00:24:34.261 { 00:24:34.261 
"dma_device_id": "system", 00:24:34.261 "dma_device_type": 1 00:24:34.261 }, 00:24:34.261 { 00:24:34.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:34.261 "dma_device_type": 2 00:24:34.261 } 00:24:34.261 ], 00:24:34.261 "driver_specific": {} 00:24:34.261 }' 00:24:34.261 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:34.261 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:34.261 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:34.261 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:34.261 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:34.261 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:34.261 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:34.261 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:34.261 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:34.261 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:34.261 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:34.261 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:34.261 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:34.261 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:34.261 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:34.520 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:34.520 "name": "BaseBdev3", 00:24:34.520 "aliases": [ 00:24:34.520 "b932eb2c-428f-11ef-a0af-c98d8ee52a94" 00:24:34.520 ], 00:24:34.520 "product_name": "Malloc disk", 00:24:34.520 "block_size": 512, 00:24:34.520 "num_blocks": 65536, 00:24:34.520 "uuid": "b932eb2c-428f-11ef-a0af-c98d8ee52a94", 00:24:34.520 "assigned_rate_limits": { 00:24:34.520 "rw_ios_per_sec": 0, 00:24:34.520 "rw_mbytes_per_sec": 0, 00:24:34.520 "r_mbytes_per_sec": 0, 00:24:34.520 "w_mbytes_per_sec": 0 00:24:34.520 }, 00:24:34.520 "claimed": true, 00:24:34.520 "claim_type": "exclusive_write", 00:24:34.520 "zoned": false, 00:24:34.520 "supported_io_types": { 00:24:34.520 "read": true, 00:24:34.520 "write": true, 00:24:34.520 "unmap": true, 00:24:34.520 "flush": true, 00:24:34.520 "reset": true, 00:24:34.520 "nvme_admin": false, 00:24:34.520 "nvme_io": false, 00:24:34.520 "nvme_io_md": false, 00:24:34.520 "write_zeroes": true, 00:24:34.520 "zcopy": true, 00:24:34.520 "get_zone_info": false, 00:24:34.520 "zone_management": false, 00:24:34.520 "zone_append": false, 00:24:34.520 "compare": false, 00:24:34.520 "compare_and_write": false, 00:24:34.520 "abort": true, 00:24:34.520 "seek_hole": false, 00:24:34.520 "seek_data": false, 00:24:34.520 "copy": true, 00:24:34.520 "nvme_iov_md": false 00:24:34.520 }, 00:24:34.520 "memory_domains": [ 00:24:34.520 { 00:24:34.520 "dma_device_id": "system", 00:24:34.520 "dma_device_type": 1 00:24:34.520 }, 00:24:34.520 { 00:24:34.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:24:34.520 "dma_device_type": 2 00:24:34.520 } 00:24:34.520 ], 00:24:34.520 "driver_specific": {} 00:24:34.520 }' 00:24:34.520 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:34.520 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:34.520 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:34.520 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:34.520 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:34.520 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:34.520 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:34.520 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:34.780 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:34.780 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:34.780 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:34.780 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:34.780 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:34.780 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:34.780 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:34.780 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:34.780 "name": "BaseBdev4", 00:24:34.780 "aliases": [ 00:24:34.780 "b98c08ce-428f-11ef-a0af-c98d8ee52a94" 00:24:34.780 ], 00:24:34.780 "product_name": "Malloc disk", 00:24:34.780 "block_size": 512, 00:24:34.780 "num_blocks": 65536, 00:24:34.780 "uuid": "b98c08ce-428f-11ef-a0af-c98d8ee52a94", 00:24:34.780 "assigned_rate_limits": { 00:24:34.780 "rw_ios_per_sec": 0, 00:24:34.780 "rw_mbytes_per_sec": 0, 00:24:34.780 "r_mbytes_per_sec": 0, 00:24:34.780 "w_mbytes_per_sec": 0 00:24:34.780 }, 00:24:34.780 "claimed": true, 00:24:34.780 "claim_type": "exclusive_write", 00:24:34.780 "zoned": false, 00:24:34.780 "supported_io_types": { 00:24:34.780 "read": true, 00:24:34.780 "write": true, 00:24:34.780 "unmap": true, 00:24:34.780 "flush": true, 00:24:34.780 "reset": true, 00:24:34.780 "nvme_admin": false, 00:24:34.780 "nvme_io": false, 00:24:34.780 "nvme_io_md": false, 00:24:34.780 "write_zeroes": true, 00:24:34.780 "zcopy": true, 00:24:34.780 "get_zone_info": false, 00:24:34.780 "zone_management": false, 00:24:34.780 "zone_append": false, 00:24:34.780 "compare": false, 00:24:34.780 "compare_and_write": false, 00:24:34.780 "abort": true, 00:24:34.780 "seek_hole": false, 00:24:34.780 "seek_data": false, 00:24:34.780 "copy": true, 00:24:34.780 "nvme_iov_md": false 00:24:34.780 }, 00:24:34.780 "memory_domains": [ 00:24:34.780 { 00:24:34.780 "dma_device_id": "system", 00:24:34.780 "dma_device_type": 1 00:24:34.780 }, 00:24:34.780 { 00:24:34.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:34.780 "dma_device_type": 2 00:24:34.780 } 00:24:34.780 ], 00:24:34.780 "driver_specific": {} 00:24:34.780 }' 00:24:34.780 09:51:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:34.780 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:34.780 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:34.780 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:34.780 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:34.780 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:35.039 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:35.039 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:35.039 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:35.039 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:35.039 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:35.039 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:35.039 09:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:35.039 [2024-07-15 09:51:03.107151] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:35.039 [2024-07-15 09:51:03.107182] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:35.039 [2024-07-15 09:51:03.107209] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:35.039 [2024-07-15 09:51:03.107227] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:35.039 [2024-07-15 09:51:03.107232] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf0940c34f00 name Existed_Raid, state offline 00:24:35.039 09:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 59050 00:24:35.039 09:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 59050 ']' 00:24:35.039 09:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 59050 00:24:35.039 09:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:24:35.039 09:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:24:35.039 09:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 59050 00:24:35.039 09:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:24:35.039 09:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:24:35.039 09:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:24:35.039 killing process with pid 59050 00:24:35.039 09:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59050' 00:24:35.039 09:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 59050 00:24:35.039 09:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 59050 
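(The shell trace above exercises SPDK's hot-remove/re-add path for a raid0 array with an on-disk superblock. Condensed into plain commands, it is roughly the following minimal sketch, not the test script itself: the socket path, repo path, bdev names, UUID, RPC methods and jq filters are taken verbatim from the trace, while the rpc shorthand variable is added here only for readability.)

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Delete one malloc member; the array stays in the "configuring" state and
    # the vacated slot reports is_configured == false.
    $rpc bdev_malloc_delete BaseBdev1
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'

    # Re-attach an already-existing bdev into its empty slot...
    $rpc bdev_raid_add_base_bdev Existed_Raid BaseBdev2

    # ...and re-create the deleted member under a new name but with its
    # original UUID, so the raid module claims it back into the same slot.
    $rpc bdev_malloc_create 32 512 -b NewBaseBdev -u baeeabae-428f-11ef-a0af-c98d8ee52a94
    $rpc bdev_wait_for_examine
    $rpc bdev_get_bdevs -b NewBaseBdev -t 2000 >/dev/null

    # With all four slots configured again, the array transitions to "online".
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'

    # Tear down; per the trace this moves the array from online to offline.
    $rpc bdev_raid_delete Existed_Raid

(The repeated per-member checks in the trace follow the same pattern applied to bdev_get_bdevs -b <name> output: jq .block_size, .md_size, .md_interleave and .dif_type compared against the expected 512, null, null and null.)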
00:24:35.039 [2024-07-15 09:51:03.137931] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:35.300 [2024-07-15 09:51:03.172217] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:35.558 09:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:24:35.559 00:24:35.559 real 0m22.560s 00:24:35.559 user 0m40.093s 00:24:35.559 sys 0m4.227s 00:24:35.559 09:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:35.559 ************************************ 00:24:35.559 END TEST raid_state_function_test_sb 00:24:35.559 ************************************ 00:24:35.559 09:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:35.559 09:51:03 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:24:35.559 09:51:03 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:24:35.559 09:51:03 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:24:35.559 09:51:03 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:35.559 09:51:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:35.559 ************************************ 00:24:35.559 START TEST raid_superblock_test 00:24:35.559 ************************************ 00:24:35.559 09:51:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 4 00:24:35.559 09:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:24:35.559 09:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:24:35.559 09:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:24:35.559 09:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:24:35.559 09:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:24:35.559 09:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:24:35.559 09:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:24:35.559 09:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:24:35.559 09:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:24:35.559 09:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:24:35.559 09:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:24:35.559 09:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:24:35.559 09:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:24:35.559 09:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:24:35.559 09:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:24:35.559 09:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:24:35.559 09:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=59848 00:24:35.559 09:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 59848 /var/tmp/spdk-raid.sock 00:24:35.559 09:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r 
/var/tmp/spdk-raid.sock -L bdev_raid 00:24:35.559 09:51:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 59848 ']' 00:24:35.559 09:51:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:35.559 09:51:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:35.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:35.559 09:51:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:35.559 09:51:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:35.559 09:51:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:35.559 [2024-07-15 09:51:03.507727] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:24:35.559 [2024-07-15 09:51:03.508208] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:24:36.125 EAL: TSC is not safe to use in SMP mode 00:24:36.125 EAL: TSC is not invariant 00:24:36.125 [2024-07-15 09:51:03.944956] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.125 [2024-07-15 09:51:04.059710] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:24:36.125 [2024-07-15 09:51:04.062144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.125 [2024-07-15 09:51:04.062850] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:36.125 [2024-07-15 09:51:04.062862] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:36.383 09:51:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:36.383 09:51:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:24:36.383 09:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:24:36.383 09:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:24:36.383 09:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:24:36.383 09:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:24:36.383 09:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:36.383 09:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:36.383 09:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:24:36.383 09:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:36.383 09:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:24:36.642 malloc1 00:24:36.642 09:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:36.900 [2024-07-15 09:51:04.865846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:36.900 
[2024-07-15 09:51:04.865919] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:36.900 [2024-07-15 09:51:04.865931] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2cd5a0634780 00:24:36.900 [2024-07-15 09:51:04.865939] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:36.900 [2024-07-15 09:51:04.866999] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:36.900 [2024-07-15 09:51:04.867037] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:36.900 pt1 00:24:36.900 09:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:24:36.900 09:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:24:36.900 09:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:24:36.900 09:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:24:36.900 09:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:36.900 09:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:36.900 09:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:24:36.900 09:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:36.900 09:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:24:37.158 malloc2 00:24:37.158 09:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:37.447 [2024-07-15 09:51:05.269869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:37.447 [2024-07-15 09:51:05.269948] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:37.447 [2024-07-15 09:51:05.269958] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2cd5a0634c80 00:24:37.447 [2024-07-15 09:51:05.269966] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:37.447 [2024-07-15 09:51:05.270778] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:37.447 [2024-07-15 09:51:05.270810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:37.447 pt2 00:24:37.447 09:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:24:37.447 09:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:24:37.447 09:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:24:37.447 09:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:24:37.447 09:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:24:37.447 09:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:37.447 09:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:24:37.447 09:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # 
base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:37.447 09:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:24:37.447 malloc3 00:24:37.447 09:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:37.705 [2024-07-15 09:51:05.729897] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:37.705 [2024-07-15 09:51:05.729962] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:37.705 [2024-07-15 09:51:05.729972] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2cd5a0635180 00:24:37.705 [2024-07-15 09:51:05.729980] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:37.705 [2024-07-15 09:51:05.730673] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:37.705 [2024-07-15 09:51:05.730708] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:37.705 pt3 00:24:37.705 09:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:24:37.705 09:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:24:37.705 09:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:24:37.705 09:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:24:37.705 09:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:24:37.705 09:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:37.705 09:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:24:37.705 09:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:37.705 09:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:24:37.964 malloc4 00:24:37.964 09:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:38.222 [2024-07-15 09:51:06.145922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:38.222 [2024-07-15 09:51:06.145990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:38.222 [2024-07-15 09:51:06.146000] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2cd5a0635680 00:24:38.222 [2024-07-15 09:51:06.146007] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:38.222 [2024-07-15 09:51:06.146680] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:38.222 [2024-07-15 09:51:06.146710] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:38.222 pt4 00:24:38.222 09:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:24:38.222 09:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:24:38.222 09:51:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:24:38.482 [2024-07-15 09:51:06.341942] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:38.482 [2024-07-15 09:51:06.342573] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:38.482 [2024-07-15 09:51:06.342595] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:38.482 [2024-07-15 09:51:06.342606] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:38.482 [2024-07-15 09:51:06.342670] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2cd5a0635900 00:24:38.482 [2024-07-15 09:51:06.342676] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:38.482 [2024-07-15 09:51:06.342710] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2cd5a0697e20 00:24:38.482 [2024-07-15 09:51:06.342780] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2cd5a0635900 00:24:38.482 [2024-07-15 09:51:06.342784] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2cd5a0635900 00:24:38.482 [2024-07-15 09:51:06.342803] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:38.482 09:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:24:38.482 09:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:38.482 09:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:38.482 09:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:38.482 09:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:38.482 09:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:38.482 09:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:38.482 09:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:38.482 09:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:38.482 09:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:38.482 09:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:38.482 09:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:38.742 09:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:38.742 "name": "raid_bdev1", 00:24:38.742 "uuid": "c159cdcb-428f-11ef-a0af-c98d8ee52a94", 00:24:38.742 "strip_size_kb": 64, 00:24:38.742 "state": "online", 00:24:38.742 "raid_level": "raid0", 00:24:38.742 "superblock": true, 00:24:38.742 "num_base_bdevs": 4, 00:24:38.742 "num_base_bdevs_discovered": 4, 00:24:38.742 "num_base_bdevs_operational": 4, 00:24:38.742 "base_bdevs_list": [ 00:24:38.742 { 00:24:38.742 "name": "pt1", 00:24:38.742 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:38.742 "is_configured": true, 00:24:38.742 "data_offset": 2048, 00:24:38.742 "data_size": 
63488 00:24:38.742 }, 00:24:38.742 { 00:24:38.742 "name": "pt2", 00:24:38.742 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:38.742 "is_configured": true, 00:24:38.742 "data_offset": 2048, 00:24:38.742 "data_size": 63488 00:24:38.742 }, 00:24:38.742 { 00:24:38.742 "name": "pt3", 00:24:38.742 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:38.742 "is_configured": true, 00:24:38.742 "data_offset": 2048, 00:24:38.742 "data_size": 63488 00:24:38.742 }, 00:24:38.742 { 00:24:38.742 "name": "pt4", 00:24:38.742 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:38.742 "is_configured": true, 00:24:38.742 "data_offset": 2048, 00:24:38.742 "data_size": 63488 00:24:38.742 } 00:24:38.742 ] 00:24:38.742 }' 00:24:38.742 09:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:38.742 09:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.001 09:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:24:39.001 09:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:24:39.001 09:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:39.001 09:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:39.001 09:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:39.001 09:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:24:39.001 09:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:39.001 09:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:39.260 [2024-07-15 09:51:07.113997] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:39.260 09:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:39.260 "name": "raid_bdev1", 00:24:39.260 "aliases": [ 00:24:39.260 "c159cdcb-428f-11ef-a0af-c98d8ee52a94" 00:24:39.260 ], 00:24:39.260 "product_name": "Raid Volume", 00:24:39.260 "block_size": 512, 00:24:39.260 "num_blocks": 253952, 00:24:39.260 "uuid": "c159cdcb-428f-11ef-a0af-c98d8ee52a94", 00:24:39.260 "assigned_rate_limits": { 00:24:39.260 "rw_ios_per_sec": 0, 00:24:39.260 "rw_mbytes_per_sec": 0, 00:24:39.260 "r_mbytes_per_sec": 0, 00:24:39.260 "w_mbytes_per_sec": 0 00:24:39.260 }, 00:24:39.260 "claimed": false, 00:24:39.260 "zoned": false, 00:24:39.260 "supported_io_types": { 00:24:39.260 "read": true, 00:24:39.260 "write": true, 00:24:39.260 "unmap": true, 00:24:39.260 "flush": true, 00:24:39.260 "reset": true, 00:24:39.260 "nvme_admin": false, 00:24:39.260 "nvme_io": false, 00:24:39.260 "nvme_io_md": false, 00:24:39.260 "write_zeroes": true, 00:24:39.260 "zcopy": false, 00:24:39.260 "get_zone_info": false, 00:24:39.260 "zone_management": false, 00:24:39.260 "zone_append": false, 00:24:39.260 "compare": false, 00:24:39.260 "compare_and_write": false, 00:24:39.260 "abort": false, 00:24:39.260 "seek_hole": false, 00:24:39.260 "seek_data": false, 00:24:39.260 "copy": false, 00:24:39.260 "nvme_iov_md": false 00:24:39.260 }, 00:24:39.260 "memory_domains": [ 00:24:39.260 { 00:24:39.260 "dma_device_id": "system", 00:24:39.260 "dma_device_type": 1 00:24:39.260 }, 00:24:39.260 { 00:24:39.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:39.260 "dma_device_type": 2 
00:24:39.260 }, 00:24:39.260 { 00:24:39.260 "dma_device_id": "system", 00:24:39.260 "dma_device_type": 1 00:24:39.260 }, 00:24:39.260 { 00:24:39.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:39.260 "dma_device_type": 2 00:24:39.260 }, 00:24:39.260 { 00:24:39.260 "dma_device_id": "system", 00:24:39.260 "dma_device_type": 1 00:24:39.260 }, 00:24:39.260 { 00:24:39.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:39.260 "dma_device_type": 2 00:24:39.260 }, 00:24:39.260 { 00:24:39.260 "dma_device_id": "system", 00:24:39.260 "dma_device_type": 1 00:24:39.260 }, 00:24:39.260 { 00:24:39.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:39.260 "dma_device_type": 2 00:24:39.260 } 00:24:39.260 ], 00:24:39.260 "driver_specific": { 00:24:39.260 "raid": { 00:24:39.260 "uuid": "c159cdcb-428f-11ef-a0af-c98d8ee52a94", 00:24:39.260 "strip_size_kb": 64, 00:24:39.260 "state": "online", 00:24:39.260 "raid_level": "raid0", 00:24:39.260 "superblock": true, 00:24:39.260 "num_base_bdevs": 4, 00:24:39.260 "num_base_bdevs_discovered": 4, 00:24:39.260 "num_base_bdevs_operational": 4, 00:24:39.260 "base_bdevs_list": [ 00:24:39.260 { 00:24:39.260 "name": "pt1", 00:24:39.260 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:39.260 "is_configured": true, 00:24:39.260 "data_offset": 2048, 00:24:39.260 "data_size": 63488 00:24:39.260 }, 00:24:39.260 { 00:24:39.260 "name": "pt2", 00:24:39.260 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:39.260 "is_configured": true, 00:24:39.261 "data_offset": 2048, 00:24:39.261 "data_size": 63488 00:24:39.261 }, 00:24:39.261 { 00:24:39.261 "name": "pt3", 00:24:39.261 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:39.261 "is_configured": true, 00:24:39.261 "data_offset": 2048, 00:24:39.261 "data_size": 63488 00:24:39.261 }, 00:24:39.261 { 00:24:39.261 "name": "pt4", 00:24:39.261 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:39.261 "is_configured": true, 00:24:39.261 "data_offset": 2048, 00:24:39.261 "data_size": 63488 00:24:39.261 } 00:24:39.261 ] 00:24:39.261 } 00:24:39.261 } 00:24:39.261 }' 00:24:39.261 09:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:39.261 09:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:24:39.261 pt2 00:24:39.261 pt3 00:24:39.261 pt4' 00:24:39.261 09:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:39.261 09:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:24:39.261 09:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:39.521 09:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:39.521 "name": "pt1", 00:24:39.521 "aliases": [ 00:24:39.521 "00000000-0000-0000-0000-000000000001" 00:24:39.521 ], 00:24:39.521 "product_name": "passthru", 00:24:39.521 "block_size": 512, 00:24:39.521 "num_blocks": 65536, 00:24:39.521 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:39.521 "assigned_rate_limits": { 00:24:39.521 "rw_ios_per_sec": 0, 00:24:39.521 "rw_mbytes_per_sec": 0, 00:24:39.521 "r_mbytes_per_sec": 0, 00:24:39.521 "w_mbytes_per_sec": 0 00:24:39.521 }, 00:24:39.521 "claimed": true, 00:24:39.521 "claim_type": "exclusive_write", 00:24:39.521 "zoned": false, 00:24:39.521 "supported_io_types": { 00:24:39.521 "read": true, 00:24:39.521 "write": 
true, 00:24:39.521 "unmap": true, 00:24:39.521 "flush": true, 00:24:39.521 "reset": true, 00:24:39.521 "nvme_admin": false, 00:24:39.521 "nvme_io": false, 00:24:39.521 "nvme_io_md": false, 00:24:39.521 "write_zeroes": true, 00:24:39.521 "zcopy": true, 00:24:39.521 "get_zone_info": false, 00:24:39.521 "zone_management": false, 00:24:39.521 "zone_append": false, 00:24:39.521 "compare": false, 00:24:39.521 "compare_and_write": false, 00:24:39.521 "abort": true, 00:24:39.521 "seek_hole": false, 00:24:39.521 "seek_data": false, 00:24:39.521 "copy": true, 00:24:39.521 "nvme_iov_md": false 00:24:39.521 }, 00:24:39.521 "memory_domains": [ 00:24:39.521 { 00:24:39.521 "dma_device_id": "system", 00:24:39.521 "dma_device_type": 1 00:24:39.521 }, 00:24:39.521 { 00:24:39.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:39.521 "dma_device_type": 2 00:24:39.521 } 00:24:39.521 ], 00:24:39.521 "driver_specific": { 00:24:39.521 "passthru": { 00:24:39.521 "name": "pt1", 00:24:39.521 "base_bdev_name": "malloc1" 00:24:39.521 } 00:24:39.521 } 00:24:39.521 }' 00:24:39.521 09:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:39.521 09:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:39.521 09:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:39.521 09:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:39.521 09:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:39.521 09:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:39.521 09:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:39.521 09:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:39.521 09:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:39.521 09:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:39.521 09:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:39.521 09:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:39.521 09:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:39.521 09:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:24:39.521 09:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:39.781 09:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:39.781 "name": "pt2", 00:24:39.781 "aliases": [ 00:24:39.781 "00000000-0000-0000-0000-000000000002" 00:24:39.781 ], 00:24:39.781 "product_name": "passthru", 00:24:39.781 "block_size": 512, 00:24:39.781 "num_blocks": 65536, 00:24:39.781 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:39.781 "assigned_rate_limits": { 00:24:39.781 "rw_ios_per_sec": 0, 00:24:39.781 "rw_mbytes_per_sec": 0, 00:24:39.781 "r_mbytes_per_sec": 0, 00:24:39.781 "w_mbytes_per_sec": 0 00:24:39.781 }, 00:24:39.781 "claimed": true, 00:24:39.781 "claim_type": "exclusive_write", 00:24:39.781 "zoned": false, 00:24:39.781 "supported_io_types": { 00:24:39.781 "read": true, 00:24:39.781 "write": true, 00:24:39.781 "unmap": true, 00:24:39.781 "flush": true, 00:24:39.781 "reset": true, 00:24:39.781 "nvme_admin": false, 00:24:39.781 "nvme_io": false, 
00:24:39.781 "nvme_io_md": false, 00:24:39.781 "write_zeroes": true, 00:24:39.781 "zcopy": true, 00:24:39.781 "get_zone_info": false, 00:24:39.781 "zone_management": false, 00:24:39.781 "zone_append": false, 00:24:39.781 "compare": false, 00:24:39.781 "compare_and_write": false, 00:24:39.781 "abort": true, 00:24:39.781 "seek_hole": false, 00:24:39.781 "seek_data": false, 00:24:39.781 "copy": true, 00:24:39.781 "nvme_iov_md": false 00:24:39.781 }, 00:24:39.781 "memory_domains": [ 00:24:39.781 { 00:24:39.781 "dma_device_id": "system", 00:24:39.781 "dma_device_type": 1 00:24:39.781 }, 00:24:39.781 { 00:24:39.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:39.781 "dma_device_type": 2 00:24:39.781 } 00:24:39.781 ], 00:24:39.781 "driver_specific": { 00:24:39.781 "passthru": { 00:24:39.781 "name": "pt2", 00:24:39.781 "base_bdev_name": "malloc2" 00:24:39.781 } 00:24:39.781 } 00:24:39.781 }' 00:24:39.781 09:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:39.781 09:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:39.781 09:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:39.781 09:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:39.781 09:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:39.781 09:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:39.781 09:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:39.781 09:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:39.781 09:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:39.781 09:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:39.781 09:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:39.781 09:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:39.781 09:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:39.781 09:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:24:39.781 09:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:40.040 09:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:40.040 "name": "pt3", 00:24:40.040 "aliases": [ 00:24:40.040 "00000000-0000-0000-0000-000000000003" 00:24:40.040 ], 00:24:40.040 "product_name": "passthru", 00:24:40.040 "block_size": 512, 00:24:40.040 "num_blocks": 65536, 00:24:40.040 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:40.040 "assigned_rate_limits": { 00:24:40.040 "rw_ios_per_sec": 0, 00:24:40.040 "rw_mbytes_per_sec": 0, 00:24:40.040 "r_mbytes_per_sec": 0, 00:24:40.040 "w_mbytes_per_sec": 0 00:24:40.040 }, 00:24:40.040 "claimed": true, 00:24:40.040 "claim_type": "exclusive_write", 00:24:40.040 "zoned": false, 00:24:40.040 "supported_io_types": { 00:24:40.040 "read": true, 00:24:40.040 "write": true, 00:24:40.040 "unmap": true, 00:24:40.040 "flush": true, 00:24:40.040 "reset": true, 00:24:40.040 "nvme_admin": false, 00:24:40.040 "nvme_io": false, 00:24:40.040 "nvme_io_md": false, 00:24:40.040 "write_zeroes": true, 00:24:40.041 "zcopy": true, 00:24:40.041 "get_zone_info": false, 00:24:40.041 
"zone_management": false, 00:24:40.041 "zone_append": false, 00:24:40.041 "compare": false, 00:24:40.041 "compare_and_write": false, 00:24:40.041 "abort": true, 00:24:40.041 "seek_hole": false, 00:24:40.041 "seek_data": false, 00:24:40.041 "copy": true, 00:24:40.041 "nvme_iov_md": false 00:24:40.041 }, 00:24:40.041 "memory_domains": [ 00:24:40.041 { 00:24:40.041 "dma_device_id": "system", 00:24:40.041 "dma_device_type": 1 00:24:40.041 }, 00:24:40.041 { 00:24:40.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:40.041 "dma_device_type": 2 00:24:40.041 } 00:24:40.041 ], 00:24:40.041 "driver_specific": { 00:24:40.041 "passthru": { 00:24:40.041 "name": "pt3", 00:24:40.041 "base_bdev_name": "malloc3" 00:24:40.041 } 00:24:40.041 } 00:24:40.041 }' 00:24:40.041 09:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:40.041 09:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:40.041 09:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:40.041 09:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:40.041 09:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:40.041 09:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:40.041 09:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:40.041 09:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:40.041 09:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:40.041 09:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:40.041 09:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:40.041 09:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:40.041 09:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:40.041 09:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:24:40.041 09:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:40.300 09:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:40.300 "name": "pt4", 00:24:40.300 "aliases": [ 00:24:40.300 "00000000-0000-0000-0000-000000000004" 00:24:40.300 ], 00:24:40.300 "product_name": "passthru", 00:24:40.300 "block_size": 512, 00:24:40.300 "num_blocks": 65536, 00:24:40.300 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:40.300 "assigned_rate_limits": { 00:24:40.300 "rw_ios_per_sec": 0, 00:24:40.300 "rw_mbytes_per_sec": 0, 00:24:40.300 "r_mbytes_per_sec": 0, 00:24:40.300 "w_mbytes_per_sec": 0 00:24:40.300 }, 00:24:40.300 "claimed": true, 00:24:40.300 "claim_type": "exclusive_write", 00:24:40.300 "zoned": false, 00:24:40.300 "supported_io_types": { 00:24:40.300 "read": true, 00:24:40.300 "write": true, 00:24:40.300 "unmap": true, 00:24:40.300 "flush": true, 00:24:40.300 "reset": true, 00:24:40.300 "nvme_admin": false, 00:24:40.300 "nvme_io": false, 00:24:40.300 "nvme_io_md": false, 00:24:40.300 "write_zeroes": true, 00:24:40.300 "zcopy": true, 00:24:40.300 "get_zone_info": false, 00:24:40.300 "zone_management": false, 00:24:40.300 "zone_append": false, 00:24:40.300 "compare": false, 00:24:40.300 "compare_and_write": false, 00:24:40.300 "abort": 
true, 00:24:40.300 "seek_hole": false, 00:24:40.300 "seek_data": false, 00:24:40.300 "copy": true, 00:24:40.300 "nvme_iov_md": false 00:24:40.300 }, 00:24:40.300 "memory_domains": [ 00:24:40.300 { 00:24:40.300 "dma_device_id": "system", 00:24:40.300 "dma_device_type": 1 00:24:40.300 }, 00:24:40.300 { 00:24:40.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:40.300 "dma_device_type": 2 00:24:40.300 } 00:24:40.300 ], 00:24:40.300 "driver_specific": { 00:24:40.300 "passthru": { 00:24:40.300 "name": "pt4", 00:24:40.300 "base_bdev_name": "malloc4" 00:24:40.300 } 00:24:40.300 } 00:24:40.300 }' 00:24:40.300 09:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:40.300 09:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:40.300 09:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:40.300 09:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:40.300 09:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:40.300 09:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:40.300 09:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:40.300 09:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:40.300 09:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:40.300 09:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:40.300 09:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:40.559 09:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:40.559 09:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:40.559 09:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:24:40.559 [2024-07-15 09:51:08.606087] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:40.559 09:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=c159cdcb-428f-11ef-a0af-c98d8ee52a94 00:24:40.559 09:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z c159cdcb-428f-11ef-a0af-c98d8ee52a94 ']' 00:24:40.559 09:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:40.818 [2024-07-15 09:51:08.806055] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:40.818 [2024-07-15 09:51:08.806080] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:40.818 [2024-07-15 09:51:08.806099] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:40.818 [2024-07-15 09:51:08.806116] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:40.818 [2024-07-15 09:51:08.806120] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2cd5a0635900 name raid_bdev1, state offline 00:24:40.818 09:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.818 09:51:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:24:41.077 09:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:24:41.077 09:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:24:41.077 09:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:24:41.077 09:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:24:41.336 09:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:24:41.336 09:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:41.595 09:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:24:41.595 09:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:24:41.595 09:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:24:41.595 09:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:24:41.853 09:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:24:41.853 09:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:24:42.111 09:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:24:42.111 09:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:42.111 09:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:24:42.111 09:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:42.111 09:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:42.111 09:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:42.111 09:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:42.111 09:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:42.111 09:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:42.111 09:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:42.111 09:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:42.111 09:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:24:42.111 09:51:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:24:42.370 [2024-07-15 09:51:10.302200] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:24:42.370 [2024-07-15 09:51:10.302881] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:24:42.370 [2024-07-15 09:51:10.302901] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
[2024-07-15 09:51:10.302909] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:24:42.370 [2024-07-15 09:51:10.302924] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:24:42.370 [2024-07-15 09:51:10.302963] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:24:42.370 [2024-07-15 09:51:10.302972] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:24:42.370 [2024-07-15 09:51:10.302980] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:24:42.370 [2024-07-15 09:51:10.302988] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:24:42.370 [2024-07-15 09:51:10.302993] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2cd5a0635680 name raid_bdev1, state configuring
00:24:42.370 request:
00:24:42.370 {
00:24:42.370 "name": "raid_bdev1",
00:24:42.370 "raid_level": "raid0",
00:24:42.370 "base_bdevs": [
00:24:42.370 "malloc1",
00:24:42.370 "malloc2",
00:24:42.370 "malloc3",
00:24:42.370 "malloc4"
00:24:42.370 ],
00:24:42.370 "strip_size_kb": 64,
00:24:42.370 "superblock": false,
00:24:42.370 "method": "bdev_raid_create",
00:24:42.370 "req_id": 1
00:24:42.370 }
00:24:42.370 Got JSON-RPC error response
00:24:42.370 response:
00:24:42.370 {
00:24:42.370 "code": -17,
00:24:42.370 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:24:42.370 }
00:24:42.370 09:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1
00:24:42.370 09:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:24:42.370 09:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:24:42.370 09:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:24:42.370 09:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]'
00:24:42.370 09:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:42.629 09:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev=
00:24:42.629 09:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']'
00:24:42.629 09:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:24:42.629 [2024-07-15 09:51:10.682240] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:24:42.629 [2024-07-15 09:51:10.682282] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base
bdev opened 00:24:42.629 [2024-07-15 09:51:10.682292] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2cd5a0635180 00:24:42.629 [2024-07-15 09:51:10.682299] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:42.629 [2024-07-15 09:51:10.683067] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:42.629 [2024-07-15 09:51:10.683096] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:42.629 [2024-07-15 09:51:10.683117] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:42.629 [2024-07-15 09:51:10.683129] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:42.629 pt1 00:24:42.629 09:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:24:42.629 09:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:42.629 09:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:42.629 09:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:42.629 09:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:42.629 09:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:42.629 09:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:42.629 09:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:42.629 09:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:42.629 09:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:42.629 09:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:42.629 09:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:42.888 09:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:42.888 "name": "raid_bdev1", 00:24:42.888 "uuid": "c159cdcb-428f-11ef-a0af-c98d8ee52a94", 00:24:42.888 "strip_size_kb": 64, 00:24:42.888 "state": "configuring", 00:24:42.888 "raid_level": "raid0", 00:24:42.888 "superblock": true, 00:24:42.888 "num_base_bdevs": 4, 00:24:42.888 "num_base_bdevs_discovered": 1, 00:24:42.888 "num_base_bdevs_operational": 4, 00:24:42.888 "base_bdevs_list": [ 00:24:42.888 { 00:24:42.888 "name": "pt1", 00:24:42.888 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:42.888 "is_configured": true, 00:24:42.888 "data_offset": 2048, 00:24:42.888 "data_size": 63488 00:24:42.888 }, 00:24:42.888 { 00:24:42.888 "name": null, 00:24:42.888 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:42.888 "is_configured": false, 00:24:42.888 "data_offset": 2048, 00:24:42.888 "data_size": 63488 00:24:42.888 }, 00:24:42.888 { 00:24:42.888 "name": null, 00:24:42.888 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:42.888 "is_configured": false, 00:24:42.888 "data_offset": 2048, 00:24:42.888 "data_size": 63488 00:24:42.888 }, 00:24:42.888 { 00:24:42.888 "name": null, 00:24:42.888 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:42.888 "is_configured": false, 00:24:42.888 "data_offset": 2048, 00:24:42.888 "data_size": 63488 
00:24:42.888 } 00:24:42.888 ] 00:24:42.888 }' 00:24:42.888 09:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:42.888 09:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.149 09:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:24:43.149 09:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:43.408 [2024-07-15 09:51:11.390296] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:43.408 [2024-07-15 09:51:11.390332] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:43.408 [2024-07-15 09:51:11.390341] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2cd5a0634780 00:24:43.408 [2024-07-15 09:51:11.390349] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:43.408 [2024-07-15 09:51:11.390450] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:43.408 [2024-07-15 09:51:11.390458] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:43.408 [2024-07-15 09:51:11.390491] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:43.408 [2024-07-15 09:51:11.390499] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:43.408 pt2 00:24:43.408 09:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:43.666 [2024-07-15 09:51:11.598305] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:24:43.666 09:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:24:43.666 09:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:43.666 09:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:43.666 09:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:43.666 09:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:43.666 09:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:43.666 09:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:43.666 09:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:43.666 09:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:43.666 09:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:43.666 09:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:43.666 09:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:43.925 09:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:43.925 "name": "raid_bdev1", 00:24:43.925 "uuid": "c159cdcb-428f-11ef-a0af-c98d8ee52a94", 00:24:43.925 "strip_size_kb": 64, 00:24:43.925 "state": "configuring", 00:24:43.925 "raid_level": 
"raid0", 00:24:43.925 "superblock": true, 00:24:43.925 "num_base_bdevs": 4, 00:24:43.925 "num_base_bdevs_discovered": 1, 00:24:43.925 "num_base_bdevs_operational": 4, 00:24:43.925 "base_bdevs_list": [ 00:24:43.925 { 00:24:43.925 "name": "pt1", 00:24:43.925 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:43.925 "is_configured": true, 00:24:43.925 "data_offset": 2048, 00:24:43.925 "data_size": 63488 00:24:43.925 }, 00:24:43.925 { 00:24:43.925 "name": null, 00:24:43.925 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:43.925 "is_configured": false, 00:24:43.925 "data_offset": 2048, 00:24:43.925 "data_size": 63488 00:24:43.925 }, 00:24:43.925 { 00:24:43.925 "name": null, 00:24:43.925 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:43.925 "is_configured": false, 00:24:43.925 "data_offset": 2048, 00:24:43.925 "data_size": 63488 00:24:43.925 }, 00:24:43.925 { 00:24:43.925 "name": null, 00:24:43.925 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:43.925 "is_configured": false, 00:24:43.925 "data_offset": 2048, 00:24:43.925 "data_size": 63488 00:24:43.925 } 00:24:43.925 ] 00:24:43.925 }' 00:24:43.925 09:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:43.925 09:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:44.184 09:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:24:44.184 09:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:24:44.184 09:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:44.442 [2024-07-15 09:51:12.358353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:44.442 [2024-07-15 09:51:12.358384] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:44.442 [2024-07-15 09:51:12.358392] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2cd5a0634780 00:24:44.442 [2024-07-15 09:51:12.358398] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:44.442 [2024-07-15 09:51:12.358476] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:44.442 [2024-07-15 09:51:12.358483] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:44.442 [2024-07-15 09:51:12.358497] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:44.442 [2024-07-15 09:51:12.358503] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:44.442 pt2 00:24:44.442 09:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:24:44.442 09:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:24:44.442 09:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:44.700 [2024-07-15 09:51:12.614368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:44.700 [2024-07-15 09:51:12.614392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:44.700 [2024-07-15 09:51:12.614400] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2cd5a0635b80 00:24:44.700 
[2024-07-15 09:51:12.614406] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:44.700 [2024-07-15 09:51:12.614473] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:44.700 [2024-07-15 09:51:12.614480] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:44.700 [2024-07-15 09:51:12.614492] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:24:44.700 [2024-07-15 09:51:12.614498] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:44.700 pt3 00:24:44.700 09:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:24:44.700 09:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:24:44.700 09:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:44.958 [2024-07-15 09:51:12.854397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:44.958 [2024-07-15 09:51:12.854422] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:44.958 [2024-07-15 09:51:12.854431] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2cd5a0635900 00:24:44.958 [2024-07-15 09:51:12.854437] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:44.958 [2024-07-15 09:51:12.854499] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:44.958 [2024-07-15 09:51:12.854506] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:44.958 [2024-07-15 09:51:12.854518] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:24:44.958 [2024-07-15 09:51:12.854523] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:44.958 [2024-07-15 09:51:12.854544] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2cd5a0634c80 00:24:44.958 [2024-07-15 09:51:12.854548] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:44.958 [2024-07-15 09:51:12.854565] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2cd5a0697e20 00:24:44.958 [2024-07-15 09:51:12.854608] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2cd5a0634c80 00:24:44.958 [2024-07-15 09:51:12.854611] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2cd5a0634c80 00:24:44.958 [2024-07-15 09:51:12.854633] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:44.958 pt4 00:24:44.958 09:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:24:44.958 09:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:24:44.958 09:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:24:44.958 09:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:44.958 09:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:44.958 09:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:44.958 09:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # 
local strip_size=64 00:24:44.958 09:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:44.958 09:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:44.958 09:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:44.958 09:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:44.958 09:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:44.958 09:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:44.958 09:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:45.237 09:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:45.237 "name": "raid_bdev1", 00:24:45.237 "uuid": "c159cdcb-428f-11ef-a0af-c98d8ee52a94", 00:24:45.237 "strip_size_kb": 64, 00:24:45.237 "state": "online", 00:24:45.237 "raid_level": "raid0", 00:24:45.237 "superblock": true, 00:24:45.237 "num_base_bdevs": 4, 00:24:45.237 "num_base_bdevs_discovered": 4, 00:24:45.237 "num_base_bdevs_operational": 4, 00:24:45.237 "base_bdevs_list": [ 00:24:45.237 { 00:24:45.237 "name": "pt1", 00:24:45.237 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:45.237 "is_configured": true, 00:24:45.237 "data_offset": 2048, 00:24:45.237 "data_size": 63488 00:24:45.237 }, 00:24:45.237 { 00:24:45.237 "name": "pt2", 00:24:45.237 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:45.237 "is_configured": true, 00:24:45.237 "data_offset": 2048, 00:24:45.237 "data_size": 63488 00:24:45.237 }, 00:24:45.237 { 00:24:45.237 "name": "pt3", 00:24:45.237 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:45.237 "is_configured": true, 00:24:45.237 "data_offset": 2048, 00:24:45.237 "data_size": 63488 00:24:45.237 }, 00:24:45.237 { 00:24:45.237 "name": "pt4", 00:24:45.237 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:45.237 "is_configured": true, 00:24:45.237 "data_offset": 2048, 00:24:45.237 "data_size": 63488 00:24:45.237 } 00:24:45.237 ] 00:24:45.237 }' 00:24:45.237 09:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:45.237 09:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:45.496 09:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:24:45.496 09:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:24:45.496 09:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:45.496 09:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:45.496 09:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:45.496 09:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:24:45.496 09:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:45.496 09:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:45.754 [2024-07-15 09:51:13.630532] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:45.754 09:51:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:45.754 "name": "raid_bdev1", 00:24:45.754 "aliases": [ 00:24:45.754 "c159cdcb-428f-11ef-a0af-c98d8ee52a94" 00:24:45.754 ], 00:24:45.754 "product_name": "Raid Volume", 00:24:45.754 "block_size": 512, 00:24:45.754 "num_blocks": 253952, 00:24:45.754 "uuid": "c159cdcb-428f-11ef-a0af-c98d8ee52a94", 00:24:45.754 "assigned_rate_limits": { 00:24:45.754 "rw_ios_per_sec": 0, 00:24:45.754 "rw_mbytes_per_sec": 0, 00:24:45.754 "r_mbytes_per_sec": 0, 00:24:45.754 "w_mbytes_per_sec": 0 00:24:45.754 }, 00:24:45.754 "claimed": false, 00:24:45.754 "zoned": false, 00:24:45.754 "supported_io_types": { 00:24:45.754 "read": true, 00:24:45.754 "write": true, 00:24:45.754 "unmap": true, 00:24:45.754 "flush": true, 00:24:45.754 "reset": true, 00:24:45.754 "nvme_admin": false, 00:24:45.754 "nvme_io": false, 00:24:45.754 "nvme_io_md": false, 00:24:45.754 "write_zeroes": true, 00:24:45.754 "zcopy": false, 00:24:45.754 "get_zone_info": false, 00:24:45.754 "zone_management": false, 00:24:45.754 "zone_append": false, 00:24:45.754 "compare": false, 00:24:45.754 "compare_and_write": false, 00:24:45.754 "abort": false, 00:24:45.754 "seek_hole": false, 00:24:45.754 "seek_data": false, 00:24:45.754 "copy": false, 00:24:45.754 "nvme_iov_md": false 00:24:45.754 }, 00:24:45.754 "memory_domains": [ 00:24:45.754 { 00:24:45.754 "dma_device_id": "system", 00:24:45.754 "dma_device_type": 1 00:24:45.754 }, 00:24:45.754 { 00:24:45.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:45.754 "dma_device_type": 2 00:24:45.754 }, 00:24:45.754 { 00:24:45.754 "dma_device_id": "system", 00:24:45.754 "dma_device_type": 1 00:24:45.754 }, 00:24:45.754 { 00:24:45.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:45.754 "dma_device_type": 2 00:24:45.754 }, 00:24:45.754 { 00:24:45.754 "dma_device_id": "system", 00:24:45.754 "dma_device_type": 1 00:24:45.754 }, 00:24:45.754 { 00:24:45.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:45.754 "dma_device_type": 2 00:24:45.754 }, 00:24:45.754 { 00:24:45.754 "dma_device_id": "system", 00:24:45.754 "dma_device_type": 1 00:24:45.754 }, 00:24:45.754 { 00:24:45.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:45.754 "dma_device_type": 2 00:24:45.754 } 00:24:45.754 ], 00:24:45.754 "driver_specific": { 00:24:45.754 "raid": { 00:24:45.754 "uuid": "c159cdcb-428f-11ef-a0af-c98d8ee52a94", 00:24:45.754 "strip_size_kb": 64, 00:24:45.754 "state": "online", 00:24:45.754 "raid_level": "raid0", 00:24:45.754 "superblock": true, 00:24:45.754 "num_base_bdevs": 4, 00:24:45.754 "num_base_bdevs_discovered": 4, 00:24:45.754 "num_base_bdevs_operational": 4, 00:24:45.754 "base_bdevs_list": [ 00:24:45.754 { 00:24:45.754 "name": "pt1", 00:24:45.754 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:45.754 "is_configured": true, 00:24:45.754 "data_offset": 2048, 00:24:45.754 "data_size": 63488 00:24:45.754 }, 00:24:45.754 { 00:24:45.754 "name": "pt2", 00:24:45.754 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:45.754 "is_configured": true, 00:24:45.754 "data_offset": 2048, 00:24:45.754 "data_size": 63488 00:24:45.754 }, 00:24:45.754 { 00:24:45.754 "name": "pt3", 00:24:45.754 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:45.754 "is_configured": true, 00:24:45.754 "data_offset": 2048, 00:24:45.754 "data_size": 63488 00:24:45.754 }, 00:24:45.754 { 00:24:45.754 "name": "pt4", 00:24:45.754 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:45.754 "is_configured": true, 00:24:45.754 "data_offset": 2048, 00:24:45.754 
"data_size": 63488 00:24:45.754 } 00:24:45.754 ] 00:24:45.754 } 00:24:45.754 } 00:24:45.754 }' 00:24:45.754 09:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:45.754 09:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:24:45.754 pt2 00:24:45.754 pt3 00:24:45.754 pt4' 00:24:45.754 09:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:45.754 09:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:24:45.754 09:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:46.012 09:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:46.012 "name": "pt1", 00:24:46.012 "aliases": [ 00:24:46.012 "00000000-0000-0000-0000-000000000001" 00:24:46.012 ], 00:24:46.012 "product_name": "passthru", 00:24:46.012 "block_size": 512, 00:24:46.012 "num_blocks": 65536, 00:24:46.012 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:46.012 "assigned_rate_limits": { 00:24:46.012 "rw_ios_per_sec": 0, 00:24:46.012 "rw_mbytes_per_sec": 0, 00:24:46.012 "r_mbytes_per_sec": 0, 00:24:46.012 "w_mbytes_per_sec": 0 00:24:46.012 }, 00:24:46.012 "claimed": true, 00:24:46.012 "claim_type": "exclusive_write", 00:24:46.012 "zoned": false, 00:24:46.012 "supported_io_types": { 00:24:46.012 "read": true, 00:24:46.012 "write": true, 00:24:46.012 "unmap": true, 00:24:46.012 "flush": true, 00:24:46.012 "reset": true, 00:24:46.012 "nvme_admin": false, 00:24:46.012 "nvme_io": false, 00:24:46.012 "nvme_io_md": false, 00:24:46.012 "write_zeroes": true, 00:24:46.012 "zcopy": true, 00:24:46.012 "get_zone_info": false, 00:24:46.012 "zone_management": false, 00:24:46.012 "zone_append": false, 00:24:46.012 "compare": false, 00:24:46.012 "compare_and_write": false, 00:24:46.012 "abort": true, 00:24:46.012 "seek_hole": false, 00:24:46.012 "seek_data": false, 00:24:46.012 "copy": true, 00:24:46.012 "nvme_iov_md": false 00:24:46.012 }, 00:24:46.012 "memory_domains": [ 00:24:46.012 { 00:24:46.012 "dma_device_id": "system", 00:24:46.012 "dma_device_type": 1 00:24:46.012 }, 00:24:46.012 { 00:24:46.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:46.012 "dma_device_type": 2 00:24:46.012 } 00:24:46.012 ], 00:24:46.012 "driver_specific": { 00:24:46.012 "passthru": { 00:24:46.012 "name": "pt1", 00:24:46.012 "base_bdev_name": "malloc1" 00:24:46.012 } 00:24:46.012 } 00:24:46.012 }' 00:24:46.012 09:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:46.012 09:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:46.012 09:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:46.012 09:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:46.012 09:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:46.012 09:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:46.012 09:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:46.012 09:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:46.012 09:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:46.012 09:51:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:46.012 09:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:46.012 09:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:46.012 09:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:46.012 09:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:24:46.012 09:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:46.271 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:46.271 "name": "pt2", 00:24:46.271 "aliases": [ 00:24:46.271 "00000000-0000-0000-0000-000000000002" 00:24:46.271 ], 00:24:46.271 "product_name": "passthru", 00:24:46.271 "block_size": 512, 00:24:46.271 "num_blocks": 65536, 00:24:46.271 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:46.271 "assigned_rate_limits": { 00:24:46.271 "rw_ios_per_sec": 0, 00:24:46.271 "rw_mbytes_per_sec": 0, 00:24:46.271 "r_mbytes_per_sec": 0, 00:24:46.271 "w_mbytes_per_sec": 0 00:24:46.271 }, 00:24:46.271 "claimed": true, 00:24:46.271 "claim_type": "exclusive_write", 00:24:46.271 "zoned": false, 00:24:46.271 "supported_io_types": { 00:24:46.271 "read": true, 00:24:46.271 "write": true, 00:24:46.271 "unmap": true, 00:24:46.271 "flush": true, 00:24:46.271 "reset": true, 00:24:46.271 "nvme_admin": false, 00:24:46.271 "nvme_io": false, 00:24:46.271 "nvme_io_md": false, 00:24:46.271 "write_zeroes": true, 00:24:46.271 "zcopy": true, 00:24:46.271 "get_zone_info": false, 00:24:46.271 "zone_management": false, 00:24:46.271 "zone_append": false, 00:24:46.271 "compare": false, 00:24:46.271 "compare_and_write": false, 00:24:46.271 "abort": true, 00:24:46.271 "seek_hole": false, 00:24:46.271 "seek_data": false, 00:24:46.271 "copy": true, 00:24:46.271 "nvme_iov_md": false 00:24:46.271 }, 00:24:46.271 "memory_domains": [ 00:24:46.271 { 00:24:46.271 "dma_device_id": "system", 00:24:46.271 "dma_device_type": 1 00:24:46.271 }, 00:24:46.271 { 00:24:46.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:46.271 "dma_device_type": 2 00:24:46.271 } 00:24:46.271 ], 00:24:46.271 "driver_specific": { 00:24:46.271 "passthru": { 00:24:46.271 "name": "pt2", 00:24:46.271 "base_bdev_name": "malloc2" 00:24:46.271 } 00:24:46.271 } 00:24:46.271 }' 00:24:46.271 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:46.271 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:46.271 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:46.271 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:46.271 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:46.271 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:46.271 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:46.271 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:46.271 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:46.271 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:46.271 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- 
# jq .dif_type 00:24:46.271 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:46.271 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:46.271 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:24:46.272 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:46.531 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:46.531 "name": "pt3", 00:24:46.531 "aliases": [ 00:24:46.531 "00000000-0000-0000-0000-000000000003" 00:24:46.531 ], 00:24:46.531 "product_name": "passthru", 00:24:46.531 "block_size": 512, 00:24:46.531 "num_blocks": 65536, 00:24:46.531 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:46.531 "assigned_rate_limits": { 00:24:46.531 "rw_ios_per_sec": 0, 00:24:46.531 "rw_mbytes_per_sec": 0, 00:24:46.531 "r_mbytes_per_sec": 0, 00:24:46.531 "w_mbytes_per_sec": 0 00:24:46.531 }, 00:24:46.531 "claimed": true, 00:24:46.531 "claim_type": "exclusive_write", 00:24:46.531 "zoned": false, 00:24:46.531 "supported_io_types": { 00:24:46.531 "read": true, 00:24:46.531 "write": true, 00:24:46.531 "unmap": true, 00:24:46.531 "flush": true, 00:24:46.531 "reset": true, 00:24:46.531 "nvme_admin": false, 00:24:46.531 "nvme_io": false, 00:24:46.531 "nvme_io_md": false, 00:24:46.531 "write_zeroes": true, 00:24:46.531 "zcopy": true, 00:24:46.531 "get_zone_info": false, 00:24:46.531 "zone_management": false, 00:24:46.531 "zone_append": false, 00:24:46.531 "compare": false, 00:24:46.531 "compare_and_write": false, 00:24:46.531 "abort": true, 00:24:46.531 "seek_hole": false, 00:24:46.531 "seek_data": false, 00:24:46.531 "copy": true, 00:24:46.531 "nvme_iov_md": false 00:24:46.531 }, 00:24:46.531 "memory_domains": [ 00:24:46.531 { 00:24:46.531 "dma_device_id": "system", 00:24:46.531 "dma_device_type": 1 00:24:46.531 }, 00:24:46.531 { 00:24:46.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:46.531 "dma_device_type": 2 00:24:46.531 } 00:24:46.531 ], 00:24:46.531 "driver_specific": { 00:24:46.531 "passthru": { 00:24:46.531 "name": "pt3", 00:24:46.531 "base_bdev_name": "malloc3" 00:24:46.531 } 00:24:46.531 } 00:24:46.531 }' 00:24:46.531 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:46.531 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:46.531 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:46.531 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:46.531 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:46.791 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:46.791 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:46.791 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:46.791 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:46.791 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:46.791 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:46.791 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:46.791 09:51:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:46.791 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:24:46.791 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:46.791 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:46.791 "name": "pt4", 00:24:46.791 "aliases": [ 00:24:46.791 "00000000-0000-0000-0000-000000000004" 00:24:46.791 ], 00:24:46.791 "product_name": "passthru", 00:24:46.791 "block_size": 512, 00:24:46.791 "num_blocks": 65536, 00:24:46.791 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:46.791 "assigned_rate_limits": { 00:24:46.791 "rw_ios_per_sec": 0, 00:24:46.791 "rw_mbytes_per_sec": 0, 00:24:46.791 "r_mbytes_per_sec": 0, 00:24:46.791 "w_mbytes_per_sec": 0 00:24:46.791 }, 00:24:46.791 "claimed": true, 00:24:46.791 "claim_type": "exclusive_write", 00:24:46.791 "zoned": false, 00:24:46.791 "supported_io_types": { 00:24:46.791 "read": true, 00:24:46.791 "write": true, 00:24:46.791 "unmap": true, 00:24:46.791 "flush": true, 00:24:46.791 "reset": true, 00:24:46.791 "nvme_admin": false, 00:24:46.791 "nvme_io": false, 00:24:46.791 "nvme_io_md": false, 00:24:46.791 "write_zeroes": true, 00:24:46.791 "zcopy": true, 00:24:46.791 "get_zone_info": false, 00:24:46.791 "zone_management": false, 00:24:46.791 "zone_append": false, 00:24:46.791 "compare": false, 00:24:46.791 "compare_and_write": false, 00:24:46.791 "abort": true, 00:24:46.791 "seek_hole": false, 00:24:46.791 "seek_data": false, 00:24:46.791 "copy": true, 00:24:46.791 "nvme_iov_md": false 00:24:46.791 }, 00:24:46.791 "memory_domains": [ 00:24:46.791 { 00:24:46.791 "dma_device_id": "system", 00:24:46.791 "dma_device_type": 1 00:24:46.791 }, 00:24:46.791 { 00:24:46.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:46.791 "dma_device_type": 2 00:24:46.791 } 00:24:46.791 ], 00:24:46.791 "driver_specific": { 00:24:46.791 "passthru": { 00:24:46.791 "name": "pt4", 00:24:46.791 "base_bdev_name": "malloc4" 00:24:46.791 } 00:24:46.791 } 00:24:46.791 }' 00:24:46.791 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:47.051 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:47.051 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:47.051 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:47.051 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:47.051 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:47.051 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:47.051 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:47.051 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:47.051 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:47.051 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:47.051 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:47.051 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
raid_bdev1 00:24:47.051 09:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:24:47.051 [2024-07-15 09:51:15.154598] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:47.310 09:51:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' c159cdcb-428f-11ef-a0af-c98d8ee52a94 '!=' c159cdcb-428f-11ef-a0af-c98d8ee52a94 ']' 00:24:47.310 09:51:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:24:47.310 09:51:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:47.310 09:51:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:24:47.310 09:51:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 59848 00:24:47.310 09:51:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 59848 ']' 00:24:47.310 09:51:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 59848 00:24:47.310 09:51:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:24:47.310 09:51:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:24:47.310 09:51:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 59848 00:24:47.310 09:51:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:24:47.310 09:51:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:24:47.310 09:51:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:24:47.310 killing process with pid 59848 00:24:47.310 09:51:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59848' 00:24:47.310 09:51:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 59848 00:24:47.310 [2024-07-15 09:51:15.185673] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:47.310 [2024-07-15 09:51:15.185703] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:47.310 09:51:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 59848 00:24:47.310 [2024-07-15 09:51:15.185721] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:47.310 [2024-07-15 09:51:15.185725] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2cd5a0634c80 name raid_bdev1, state offline 00:24:47.310 [2024-07-15 09:51:15.220458] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:47.570 09:51:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:24:47.570 00:24:47.570 real 0m11.988s 00:24:47.570 user 0m21.089s 00:24:47.570 sys 0m2.066s 00:24:47.570 09:51:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:47.570 ************************************ 00:24:47.570 END TEST raid_superblock_test 00:24:47.570 ************************************ 00:24:47.570 09:51:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:47.570 09:51:15 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:24:47.570 09:51:15 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:24:47.570 09:51:15 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:24:47.570 09:51:15 bdev_raid -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:24:47.570 09:51:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:47.570 ************************************ 00:24:47.570 START TEST raid_read_error_test 00:24:47.570 ************************************ 00:24:47.570 09:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 4 read 00:24:47.570 09:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:24:47.570 09:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:24:47.570 09:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:24:47.570 09:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:24:47.570 09:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:47.570 09:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:24:47.570 09:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:47.570 09:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:47.570 09:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:24:47.570 09:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:47.570 09:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:47.570 09:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:24:47.570 09:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:47.571 09:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:47.571 09:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:24:47.571 09:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:47.571 09:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:47.571 09:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:47.571 09:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:24:47.571 09:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:24:47.571 09:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:24:47.571 09:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:24:47.571 09:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:24:47.571 09:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:24:47.571 09:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:24:47.571 09:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:24:47.571 09:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:24:47.571 09:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:24:47.571 09:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.ifAVDZU3Vn 00:24:47.571 09:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=60245 00:24:47.571 09:51:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 60245 /var/tmp/spdk-raid.sock 00:24:47.571 09:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 60245 ']' 00:24:47.571 09:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:47.571 09:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:24:47.571 09:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:47.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:47.571 09:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:47.571 09:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:47.571 09:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:47.571 [2024-07-15 09:51:15.585723] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:24:47.571 [2024-07-15 09:51:15.586057] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:24:48.139 EAL: TSC is not safe to use in SMP mode 00:24:48.139 EAL: TSC is not invariant 00:24:48.139 [2024-07-15 09:51:16.026904] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.139 [2024-07-15 09:51:16.143955] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
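For anyone replaying this step outside the harness, the launch traced above reduces to starting bdevperf against a private RPC socket and polling until it answers. A minimal sketch, assuming the repo paths shown in the trace (the polling loop merely stands in for the test's waitforlisten helper):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 \
        -o 128k -q 1 -z -f -L bdev_raid &
    raid_pid=$!
    # poll the UNIX domain socket until it accepts RPCs; waitforlisten retries the same way
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done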
00:24:48.139 [2024-07-15 09:51:16.146375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.139 [2024-07-15 09:51:16.147120] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:48.139 [2024-07-15 09:51:16.147131] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:48.716 09:51:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:48.716 09:51:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:24:48.716 09:51:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:48.716 09:51:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:48.716 BaseBdev1_malloc 00:24:48.716 09:51:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:24:48.975 true 00:24:48.975 09:51:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:24:49.234 [2024-07-15 09:51:17.158153] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:24:49.234 [2024-07-15 09:51:17.158243] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:49.234 [2024-07-15 09:51:17.158285] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20758f834780 00:24:49.234 [2024-07-15 09:51:17.158299] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:49.234 [2024-07-15 09:51:17.159106] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:49.234 [2024-07-15 09:51:17.159137] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:49.234 BaseBdev1 00:24:49.234 09:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:49.234 09:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:49.493 BaseBdev2_malloc 00:24:49.493 09:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:24:49.493 true 00:24:49.493 09:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:24:49.753 [2024-07-15 09:51:17.758181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:24:49.753 [2024-07-15 09:51:17.758252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:49.753 [2024-07-15 09:51:17.758288] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20758f834c80 00:24:49.753 [2024-07-15 09:51:17.758295] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:49.753 [2024-07-15 09:51:17.759091] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:49.753 [2024-07-15 09:51:17.759120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: BaseBdev2 00:24:49.753 BaseBdev2 00:24:49.753 09:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:49.753 09:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:50.012 BaseBdev3_malloc 00:24:50.012 09:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:24:50.270 true 00:24:50.270 09:51:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:24:50.270 [2024-07-15 09:51:18.338209] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:24:50.270 [2024-07-15 09:51:18.338286] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:50.270 [2024-07-15 09:51:18.338322] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20758f835180 00:24:50.270 [2024-07-15 09:51:18.338330] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:50.270 [2024-07-15 09:51:18.339115] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:50.270 [2024-07-15 09:51:18.339145] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:50.270 BaseBdev3 00:24:50.270 09:51:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:50.270 09:51:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:50.527 BaseBdev4_malloc 00:24:50.527 09:51:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:24:50.785 true 00:24:50.785 09:51:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:24:51.053 [2024-07-15 09:51:18.950257] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:24:51.053 [2024-07-15 09:51:18.950349] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:51.053 [2024-07-15 09:51:18.950405] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20758f835680 00:24:51.053 [2024-07-15 09:51:18.950423] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:51.053 [2024-07-15 09:51:18.951318] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:51.053 [2024-07-15 09:51:18.951367] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:51.053 BaseBdev4 00:24:51.053 09:51:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:24:51.311 [2024-07-15 09:51:19.154261] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:51.311 [2024-07-15 09:51:19.154975] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:51.311 [2024-07-15 09:51:19.155005] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:51.311 [2024-07-15 09:51:19.155036] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:51.311 [2024-07-15 09:51:19.155114] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x20758f835900 00:24:51.311 [2024-07-15 09:51:19.155119] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:51.311 [2024-07-15 09:51:19.155163] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20758f8a0e20 00:24:51.311 [2024-07-15 09:51:19.155242] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x20758f835900 00:24:51.311 [2024-07-15 09:51:19.155246] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x20758f835900 00:24:51.311 [2024-07-15 09:51:19.155270] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:51.311 09:51:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:24:51.311 09:51:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:51.311 09:51:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:51.311 09:51:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:51.311 09:51:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:51.311 09:51:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:51.311 09:51:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:51.311 09:51:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:51.311 09:51:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:51.311 09:51:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:51.311 09:51:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:51.311 09:51:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:51.311 09:51:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:51.311 "name": "raid_bdev1", 00:24:51.311 "uuid": "c8fccec7-428f-11ef-a0af-c98d8ee52a94", 00:24:51.311 "strip_size_kb": 64, 00:24:51.311 "state": "online", 00:24:51.312 "raid_level": "raid0", 00:24:51.312 "superblock": true, 00:24:51.312 "num_base_bdevs": 4, 00:24:51.312 "num_base_bdevs_discovered": 4, 00:24:51.312 "num_base_bdevs_operational": 4, 00:24:51.312 "base_bdevs_list": [ 00:24:51.312 { 00:24:51.312 "name": "BaseBdev1", 00:24:51.312 "uuid": "acc7f5fb-b0d5-c752-9ca0-bf78a16405b4", 00:24:51.312 "is_configured": true, 00:24:51.312 "data_offset": 2048, 00:24:51.312 "data_size": 63488 00:24:51.312 }, 00:24:51.312 { 00:24:51.312 "name": "BaseBdev2", 00:24:51.312 "uuid": "8655005b-3033-8d5f-8ba0-1f7a0dbf20a1", 00:24:51.312 "is_configured": true, 00:24:51.312 "data_offset": 2048, 00:24:51.312 "data_size": 63488 00:24:51.312 }, 00:24:51.312 { 00:24:51.312 "name": "BaseBdev3", 00:24:51.312 "uuid": 
"8f4388ed-051e-7156-a834-92b659b8e4d2", 00:24:51.312 "is_configured": true, 00:24:51.312 "data_offset": 2048, 00:24:51.312 "data_size": 63488 00:24:51.312 }, 00:24:51.312 { 00:24:51.312 "name": "BaseBdev4", 00:24:51.312 "uuid": "f6f4c505-7293-795c-8bc4-ed8b260f00da", 00:24:51.312 "is_configured": true, 00:24:51.312 "data_offset": 2048, 00:24:51.312 "data_size": 63488 00:24:51.312 } 00:24:51.312 ] 00:24:51.312 }' 00:24:51.312 09:51:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:51.312 09:51:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:51.569 09:51:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:24:51.569 09:51:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:24:51.827 [2024-07-15 09:51:19.734382] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20758f8a0ec0 00:24:52.796 09:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:24:53.054 09:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:24:53.054 09:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:24:53.054 09:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:24:53.054 09:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:24:53.054 09:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:53.054 09:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:53.054 09:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:53.054 09:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:53.054 09:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:53.054 09:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:53.054 09:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:53.054 09:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:53.054 09:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:53.054 09:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:53.054 09:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:53.054 09:51:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:53.054 "name": "raid_bdev1", 00:24:53.054 "uuid": "c8fccec7-428f-11ef-a0af-c98d8ee52a94", 00:24:53.054 "strip_size_kb": 64, 00:24:53.054 "state": "online", 00:24:53.054 "raid_level": "raid0", 00:24:53.054 "superblock": true, 00:24:53.054 "num_base_bdevs": 4, 00:24:53.054 "num_base_bdevs_discovered": 4, 00:24:53.054 "num_base_bdevs_operational": 4, 00:24:53.054 "base_bdevs_list": [ 00:24:53.054 { 00:24:53.054 "name": "BaseBdev1", 00:24:53.054 "uuid": 
"acc7f5fb-b0d5-c752-9ca0-bf78a16405b4", 00:24:53.054 "is_configured": true, 00:24:53.054 "data_offset": 2048, 00:24:53.054 "data_size": 63488 00:24:53.054 }, 00:24:53.054 { 00:24:53.054 "name": "BaseBdev2", 00:24:53.054 "uuid": "8655005b-3033-8d5f-8ba0-1f7a0dbf20a1", 00:24:53.054 "is_configured": true, 00:24:53.054 "data_offset": 2048, 00:24:53.054 "data_size": 63488 00:24:53.054 }, 00:24:53.054 { 00:24:53.054 "name": "BaseBdev3", 00:24:53.054 "uuid": "8f4388ed-051e-7156-a834-92b659b8e4d2", 00:24:53.054 "is_configured": true, 00:24:53.054 "data_offset": 2048, 00:24:53.054 "data_size": 63488 00:24:53.054 }, 00:24:53.054 { 00:24:53.054 "name": "BaseBdev4", 00:24:53.054 "uuid": "f6f4c505-7293-795c-8bc4-ed8b260f00da", 00:24:53.054 "is_configured": true, 00:24:53.054 "data_offset": 2048, 00:24:53.054 "data_size": 63488 00:24:53.054 } 00:24:53.054 ] 00:24:53.054 }' 00:24:53.054 09:51:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:53.054 09:51:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:53.620 09:51:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:53.620 [2024-07-15 09:51:21.633281] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:53.620 [2024-07-15 09:51:21.633315] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:53.620 [2024-07-15 09:51:21.633634] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:53.620 [2024-07-15 09:51:21.633646] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:53.620 [2024-07-15 09:51:21.633656] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:53.620 [2024-07-15 09:51:21.633660] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20758f835900 name raid_bdev1, state offline 00:24:53.620 0 00:24:53.620 09:51:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 60245 00:24:53.620 09:51:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 60245 ']' 00:24:53.620 09:51:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 60245 00:24:53.620 09:51:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:24:53.620 09:51:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:24:53.620 09:51:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 60245 00:24:53.620 09:51:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:24:53.620 09:51:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:24:53.620 09:51:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:24:53.620 killing process with pid 60245 00:24:53.620 09:51:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60245' 00:24:53.620 09:51:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 60245 00:24:53.620 09:51:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 60245 00:24:53.620 [2024-07-15 09:51:21.666158] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:53.620 [2024-07-15 09:51:21.700429] 
bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:53.885 09:51:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.ifAVDZU3Vn 00:24:53.885 09:51:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:24:53.885 09:51:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:24:53.885 09:51:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.53 00:24:53.885 09:51:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:24:53.885 09:51:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:53.885 09:51:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:24:53.885 09:51:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.53 != \0\.\0\0 ]] 00:24:53.885 00:24:53.885 real 0m6.410s 00:24:53.885 user 0m9.870s 00:24:53.885 sys 0m1.089s 00:24:53.885 09:51:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:53.885 09:51:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:53.885 ************************************ 00:24:53.885 END TEST raid_read_error_test 00:24:53.885 ************************************ 00:24:54.143 09:51:22 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:24:54.143 09:51:22 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:24:54.143 09:51:22 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:24:54.143 09:51:22 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:54.143 09:51:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:54.143 ************************************ 00:24:54.143 START TEST raid_write_error_test 00:24:54.143 ************************************ 00:24:54.143 09:51:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 4 write 00:24:54.143 09:51:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:24:54.143 09:51:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:24:54.143 09:51:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:24:54.143 09:51:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:24:54.143 09:51:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:54.143 09:51:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:24:54.143 09:51:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:54.143 09:51:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:54.143 09:51:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:24:54.143 09:51:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:54.143 09:51:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:54.143 09:51:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:24:54.143 09:51:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:54.143 09:51:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:54.143 09:51:22 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:24:54.143 09:51:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:54.144 09:51:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:54.144 09:51:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:54.144 09:51:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:24:54.144 09:51:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:24:54.144 09:51:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:24:54.144 09:51:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:24:54.144 09:51:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:24:54.144 09:51:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:24:54.144 09:51:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:24:54.144 09:51:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:24:54.144 09:51:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:24:54.144 09:51:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:24:54.144 09:51:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.6xT9smf5Ww 00:24:54.144 09:51:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=60379 00:24:54.144 09:51:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 60379 /var/tmp/spdk-raid.sock 00:24:54.144 09:51:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:24:54.144 09:51:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 60379 ']' 00:24:54.144 09:51:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:54.144 09:51:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:54.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:54.144 09:51:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:54.144 09:51:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:54.144 09:51:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:54.144 [2024-07-15 09:51:22.050553] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:24:54.144 [2024-07-15 09:51:22.050876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:24:54.711 EAL: TSC is not safe to use in SMP mode 00:24:54.711 EAL: TSC is not invariant 00:24:54.711 [2024-07-15 09:51:22.759811] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.975 [2024-07-15 09:51:22.872573] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
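Each base device in the runs above and below is a three-layer stack: a malloc bdev for backing storage, an error bdev wrapped around it for fault injection, and a passthru bdev that the RAID layer claims. A condensed sketch of the per-device RPCs (the individual commands appear verbatim in the trace; the loop is only our summary):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3 4; do
        $rpc bdev_malloc_create 32 512 -b BaseBdev${i}_malloc    # 32 MiB at 512-byte blocks -> 65536 blocks
        $rpc bdev_error_create BaseBdev${i}_malloc               # exposes EE_BaseBdev${i}_malloc for bdev_error_inject_error
        $rpc bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
    done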
00:24:54.975 [2024-07-15 09:51:22.874984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.975 [2024-07-15 09:51:22.875673] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:54.975 [2024-07-15 09:51:22.875685] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:54.975 09:51:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:54.975 09:51:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:24:54.975 09:51:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:54.975 09:51:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:55.239 BaseBdev1_malloc 00:24:55.239 09:51:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:24:55.499 true 00:24:55.499 09:51:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:24:55.778 [2024-07-15 09:51:23.706553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:24:55.778 [2024-07-15 09:51:23.706620] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:55.778 [2024-07-15 09:51:23.706651] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x130779a34780 00:24:55.778 [2024-07-15 09:51:23.706658] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:55.778 [2024-07-15 09:51:23.707380] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:55.778 [2024-07-15 09:51:23.707409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:55.778 BaseBdev1 00:24:55.778 09:51:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:55.778 09:51:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:56.071 BaseBdev2_malloc 00:24:56.071 09:51:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:24:56.071 true 00:24:56.071 09:51:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:24:56.331 [2024-07-15 09:51:24.286587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:24:56.331 [2024-07-15 09:51:24.286669] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:56.331 [2024-07-15 09:51:24.286697] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x130779a34c80 00:24:56.331 [2024-07-15 09:51:24.286704] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:56.331 [2024-07-15 09:51:24.287405] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:56.331 [2024-07-15 09:51:24.287435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev2 00:24:56.331 BaseBdev2 00:24:56.331 09:51:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:56.331 09:51:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:56.591 BaseBdev3_malloc 00:24:56.591 09:51:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:24:56.851 true 00:24:56.851 09:51:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:24:56.851 [2024-07-15 09:51:24.882625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:24:56.851 [2024-07-15 09:51:24.882692] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:56.851 [2024-07-15 09:51:24.882723] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x130779a35180 00:24:56.851 [2024-07-15 09:51:24.882731] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:56.851 [2024-07-15 09:51:24.883397] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:56.851 [2024-07-15 09:51:24.883426] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:56.851 BaseBdev3 00:24:56.851 09:51:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:56.851 09:51:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:57.110 BaseBdev4_malloc 00:24:57.110 09:51:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:24:57.369 true 00:24:57.369 09:51:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:24:57.369 [2024-07-15 09:51:25.466635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:24:57.369 [2024-07-15 09:51:25.466700] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:57.369 [2024-07-15 09:51:25.466742] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x130779a35680 00:24:57.369 [2024-07-15 09:51:25.466749] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:57.369 [2024-07-15 09:51:25.467388] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:57.369 [2024-07-15 09:51:25.467417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:57.369 BaseBdev4 00:24:57.628 09:51:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:24:57.628 [2024-07-15 09:51:25.666666] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:57.628 [2024-07-15 09:51:25.667268] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:57.628 [2024-07-15 09:51:25.667294] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:57.628 [2024-07-15 09:51:25.667308] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:57.628 [2024-07-15 09:51:25.667370] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x130779a35900 00:24:57.628 [2024-07-15 09:51:25.667376] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:57.628 [2024-07-15 09:51:25.667417] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x130779aa0e20 00:24:57.628 [2024-07-15 09:51:25.667485] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x130779a35900 00:24:57.628 [2024-07-15 09:51:25.667489] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x130779a35900 00:24:57.628 [2024-07-15 09:51:25.667508] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:57.628 09:51:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:24:57.628 09:51:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:57.629 09:51:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:57.629 09:51:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:57.629 09:51:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:57.629 09:51:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:57.629 09:51:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:57.629 09:51:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:57.629 09:51:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:57.629 09:51:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:57.629 09:51:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:57.629 09:51:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:57.888 09:51:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:57.888 "name": "raid_bdev1", 00:24:57.888 "uuid": "ccde85d5-428f-11ef-a0af-c98d8ee52a94", 00:24:57.888 "strip_size_kb": 64, 00:24:57.888 "state": "online", 00:24:57.888 "raid_level": "raid0", 00:24:57.888 "superblock": true, 00:24:57.888 "num_base_bdevs": 4, 00:24:57.888 "num_base_bdevs_discovered": 4, 00:24:57.888 "num_base_bdevs_operational": 4, 00:24:57.888 "base_bdevs_list": [ 00:24:57.888 { 00:24:57.888 "name": "BaseBdev1", 00:24:57.888 "uuid": "35777770-deaa-765a-872e-de06fec9d0d9", 00:24:57.888 "is_configured": true, 00:24:57.888 "data_offset": 2048, 00:24:57.888 "data_size": 63488 00:24:57.888 }, 00:24:57.888 { 00:24:57.888 "name": "BaseBdev2", 00:24:57.888 "uuid": "9e578395-718f-695b-b90a-564a6b7d0806", 00:24:57.888 "is_configured": true, 00:24:57.888 "data_offset": 2048, 00:24:57.888 "data_size": 63488 00:24:57.888 }, 00:24:57.888 { 00:24:57.888 "name": "BaseBdev3", 00:24:57.888 "uuid": 
"3a47c9de-19ec-e859-b9c2-2e2dc0aea735", 00:24:57.888 "is_configured": true, 00:24:57.888 "data_offset": 2048, 00:24:57.888 "data_size": 63488 00:24:57.888 }, 00:24:57.888 { 00:24:57.888 "name": "BaseBdev4", 00:24:57.889 "uuid": "30cd434a-10e8-2654-a0a3-51319bf615ed", 00:24:57.889 "is_configured": true, 00:24:57.889 "data_offset": 2048, 00:24:57.889 "data_size": 63488 00:24:57.889 } 00:24:57.889 ] 00:24:57.889 }' 00:24:57.889 09:51:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:57.889 09:51:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:58.148 09:51:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:24:58.148 09:51:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:24:58.148 [2024-07-15 09:51:26.242760] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x130779aa0ec0 00:24:59.528 09:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:24:59.528 09:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:24:59.528 09:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:24:59.528 09:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:24:59.528 09:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:24:59.528 09:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:59.528 09:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:59.528 09:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:59.528 09:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:59.528 09:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:59.528 09:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:59.528 09:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:59.528 09:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:59.528 09:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:59.528 09:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:59.528 09:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:59.788 09:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:59.788 "name": "raid_bdev1", 00:24:59.788 "uuid": "ccde85d5-428f-11ef-a0af-c98d8ee52a94", 00:24:59.788 "strip_size_kb": 64, 00:24:59.788 "state": "online", 00:24:59.788 "raid_level": "raid0", 00:24:59.788 "superblock": true, 00:24:59.788 "num_base_bdevs": 4, 00:24:59.788 "num_base_bdevs_discovered": 4, 00:24:59.788 "num_base_bdevs_operational": 4, 00:24:59.788 "base_bdevs_list": [ 00:24:59.788 { 00:24:59.788 "name": "BaseBdev1", 00:24:59.788 "uuid": 
"35777770-deaa-765a-872e-de06fec9d0d9", 00:24:59.788 "is_configured": true, 00:24:59.788 "data_offset": 2048, 00:24:59.788 "data_size": 63488 00:24:59.788 }, 00:24:59.788 { 00:24:59.788 "name": "BaseBdev2", 00:24:59.788 "uuid": "9e578395-718f-695b-b90a-564a6b7d0806", 00:24:59.788 "is_configured": true, 00:24:59.788 "data_offset": 2048, 00:24:59.788 "data_size": 63488 00:24:59.788 }, 00:24:59.788 { 00:24:59.788 "name": "BaseBdev3", 00:24:59.788 "uuid": "3a47c9de-19ec-e859-b9c2-2e2dc0aea735", 00:24:59.788 "is_configured": true, 00:24:59.788 "data_offset": 2048, 00:24:59.788 "data_size": 63488 00:24:59.788 }, 00:24:59.788 { 00:24:59.788 "name": "BaseBdev4", 00:24:59.788 "uuid": "30cd434a-10e8-2654-a0a3-51319bf615ed", 00:24:59.788 "is_configured": true, 00:24:59.788 "data_offset": 2048, 00:24:59.788 "data_size": 63488 00:24:59.788 } 00:24:59.788 ] 00:24:59.788 }' 00:24:59.788 09:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:59.788 09:51:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:00.048 09:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:00.307 [2024-07-15 09:51:28.206171] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:00.307 [2024-07-15 09:51:28.206208] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:00.307 [2024-07-15 09:51:28.206551] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:00.307 [2024-07-15 09:51:28.206561] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:00.307 [2024-07-15 09:51:28.206571] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:00.307 [2024-07-15 09:51:28.206576] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x130779a35900 name raid_bdev1, state offline 00:25:00.307 0 00:25:00.307 09:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 60379 00:25:00.307 09:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 60379 ']' 00:25:00.307 09:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 60379 00:25:00.307 09:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:25:00.307 09:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:25:00.307 09:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 60379 00:25:00.307 09:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:25:00.307 09:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:25:00.307 09:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:25:00.307 killing process with pid 60379 00:25:00.307 09:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60379' 00:25:00.307 09:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 60379 00:25:00.307 [2024-07-15 09:51:28.239887] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:00.307 09:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 60379 00:25:00.307 [2024-07-15 
09:51:28.273273] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:00.566 09:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.6xT9smf5Ww 00:25:00.566 09:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:25:00.566 09:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:25:00.566 09:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.51 00:25:00.567 09:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:25:00.567 09:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:00.567 09:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:25:00.567 09:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.51 != \0\.\0\0 ]] 00:25:00.567 00:25:00.567 real 0m6.510s 00:25:00.567 user 0m9.824s 00:25:00.567 sys 0m1.363s 00:25:00.567 09:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:00.567 ************************************ 00:25:00.567 09:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:00.567 END TEST raid_write_error_test 00:25:00.567 ************************************ 00:25:00.567 09:51:28 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:25:00.567 09:51:28 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:25:00.567 09:51:28 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:25:00.567 09:51:28 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:25:00.567 09:51:28 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:00.567 09:51:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:00.567 ************************************ 00:25:00.567 START TEST raid_state_function_test 00:25:00.567 ************************************ 00:25:00.567 09:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 4 false 00:25:00.567 09:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:25:00.567 09:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:25:00.567 09:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:25:00.567 09:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:25:00.567 09:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:25:00.567 09:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:00.567 09:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:25:00.567 09:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:00.567 09:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:00.567 09:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:25:00.567 09:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:00.567 09:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:00.567 09:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 
00:25:00.567 09:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:00.567 09:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:00.567 09:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:25:00.567 09:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:00.567 09:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:00.567 09:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:00.567 09:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:25:00.567 09:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:25:00.567 09:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:25:00.567 09:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:25:00.567 09:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:25:00.567 09:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:25:00.567 09:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:25:00.567 09:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:25:00.567 09:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:25:00.567 09:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:25:00.567 09:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=60511 00:25:00.567 Process raid pid: 60511 00:25:00.567 09:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 60511' 00:25:00.567 09:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 60511 /var/tmp/spdk-raid.sock 00:25:00.567 09:51:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:00.567 09:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 60511 ']' 00:25:00.567 09:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:00.567 09:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:00.567 09:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:00.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:00.567 09:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:00.567 09:51:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:00.567 [2024-07-15 09:51:28.622692] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
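With superblock=false, the header above leaves superblock_create_arg empty, so the create call that bdev_svc will serve carries only the strip size and level. Expanded by hand, the command under test is (a sketch assembled from the traced variables; it matches the rpc trace that follows):

    # concat keeps the '-z 64' strip-size argument; '-s' (superblock) is omitted because superblock=false
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid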
00:25:00.567 [2024-07-15 09:51:28.622959] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:25:01.503 EAL: TSC is not safe to use in SMP mode 00:25:01.503 EAL: TSC is not invariant 00:25:01.503 [2024-07-15 09:51:29.337044] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.503 [2024-07-15 09:51:29.450918] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:25:01.503 [2024-07-15 09:51:29.453320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.503 [2024-07-15 09:51:29.454016] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:01.503 [2024-07-15 09:51:29.454028] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:01.503 09:51:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:01.503 09:51:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:25:01.503 09:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:01.760 [2024-07-15 09:51:29.700906] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:01.760 [2024-07-15 09:51:29.700966] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:01.760 [2024-07-15 09:51:29.700970] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:01.760 [2024-07-15 09:51:29.700977] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:01.760 [2024-07-15 09:51:29.700980] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:01.760 [2024-07-15 09:51:29.700986] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:01.760 [2024-07-15 09:51:29.700988] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:01.760 [2024-07-15 09:51:29.700994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:01.760 09:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:01.760 09:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:01.760 09:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:01.760 09:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:01.760 09:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:01.760 09:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:01.760 09:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:01.760 09:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:01.760 09:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:01.760 09:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:01.761 09:51:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:01.761 09:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:02.018 09:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:02.018 "name": "Existed_Raid", 00:25:02.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:02.018 "strip_size_kb": 64, 00:25:02.018 "state": "configuring", 00:25:02.018 "raid_level": "concat", 00:25:02.018 "superblock": false, 00:25:02.018 "num_base_bdevs": 4, 00:25:02.018 "num_base_bdevs_discovered": 0, 00:25:02.018 "num_base_bdevs_operational": 4, 00:25:02.018 "base_bdevs_list": [ 00:25:02.018 { 00:25:02.018 "name": "BaseBdev1", 00:25:02.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:02.018 "is_configured": false, 00:25:02.018 "data_offset": 0, 00:25:02.018 "data_size": 0 00:25:02.018 }, 00:25:02.018 { 00:25:02.018 "name": "BaseBdev2", 00:25:02.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:02.018 "is_configured": false, 00:25:02.018 "data_offset": 0, 00:25:02.018 "data_size": 0 00:25:02.018 }, 00:25:02.018 { 00:25:02.018 "name": "BaseBdev3", 00:25:02.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:02.018 "is_configured": false, 00:25:02.018 "data_offset": 0, 00:25:02.018 "data_size": 0 00:25:02.018 }, 00:25:02.018 { 00:25:02.018 "name": "BaseBdev4", 00:25:02.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:02.018 "is_configured": false, 00:25:02.018 "data_offset": 0, 00:25:02.018 "data_size": 0 00:25:02.018 } 00:25:02.018 ] 00:25:02.018 }' 00:25:02.018 09:51:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:02.018 09:51:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:02.277 09:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:02.536 [2024-07-15 09:51:30.480974] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:02.536 [2024-07-15 09:51:30.481006] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2348dee34500 name Existed_Raid, state configuring 00:25:02.536 09:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:02.795 [2024-07-15 09:51:30.704987] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:02.795 [2024-07-15 09:51:30.705037] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:02.795 [2024-07-15 09:51:30.705041] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:02.795 [2024-07-15 09:51:30.705047] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:02.795 [2024-07-15 09:51:30.705050] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:02.795 [2024-07-15 09:51:30.705056] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:02.795 [2024-07-15 09:51:30.705059] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
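Note that bdev_raid_create is issued here before any of its base bdevs exist, which is why every lookup logs "unable to find bdev" and the array is registered in the configuring state with zero discovered members. A hedged sketch of the same create-then-inspect sequence, reusing the exact RPCs and jq filter traced above:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# Create a 4-member concat array with a 64 KiB strip before the members exist.
$rpc -s $sock bdev_raid_create -z 64 -r concat \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

# Expect state "configuring" with 0 of 4 base bdevs discovered.
$rpc -s $sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid")
             | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'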
00:25:02.795 [2024-07-15 09:51:30.705065] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:02.795 09:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:03.055 [2024-07-15 09:51:30.926225] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:03.055 BaseBdev1 00:25:03.055 09:51:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:25:03.055 09:51:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:25:03.055 09:51:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:03.055 09:51:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:03.055 09:51:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:03.055 09:51:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:03.055 09:51:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:03.315 09:51:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:03.575 [ 00:25:03.575 { 00:25:03.575 "name": "BaseBdev1", 00:25:03.575 "aliases": [ 00:25:03.575 "d000e28a-428f-11ef-a0af-c98d8ee52a94" 00:25:03.575 ], 00:25:03.575 "product_name": "Malloc disk", 00:25:03.575 "block_size": 512, 00:25:03.575 "num_blocks": 65536, 00:25:03.575 "uuid": "d000e28a-428f-11ef-a0af-c98d8ee52a94", 00:25:03.575 "assigned_rate_limits": { 00:25:03.575 "rw_ios_per_sec": 0, 00:25:03.575 "rw_mbytes_per_sec": 0, 00:25:03.575 "r_mbytes_per_sec": 0, 00:25:03.575 "w_mbytes_per_sec": 0 00:25:03.575 }, 00:25:03.575 "claimed": true, 00:25:03.575 "claim_type": "exclusive_write", 00:25:03.575 "zoned": false, 00:25:03.575 "supported_io_types": { 00:25:03.575 "read": true, 00:25:03.575 "write": true, 00:25:03.575 "unmap": true, 00:25:03.575 "flush": true, 00:25:03.575 "reset": true, 00:25:03.575 "nvme_admin": false, 00:25:03.575 "nvme_io": false, 00:25:03.575 "nvme_io_md": false, 00:25:03.575 "write_zeroes": true, 00:25:03.575 "zcopy": true, 00:25:03.575 "get_zone_info": false, 00:25:03.575 "zone_management": false, 00:25:03.575 "zone_append": false, 00:25:03.575 "compare": false, 00:25:03.575 "compare_and_write": false, 00:25:03.575 "abort": true, 00:25:03.575 "seek_hole": false, 00:25:03.575 "seek_data": false, 00:25:03.575 "copy": true, 00:25:03.575 "nvme_iov_md": false 00:25:03.575 }, 00:25:03.575 "memory_domains": [ 00:25:03.575 { 00:25:03.575 "dma_device_id": "system", 00:25:03.575 "dma_device_type": 1 00:25:03.575 }, 00:25:03.575 { 00:25:03.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:03.575 "dma_device_type": 2 00:25:03.575 } 00:25:03.575 ], 00:25:03.575 "driver_specific": {} 00:25:03.575 } 00:25:03.575 ] 00:25:03.575 09:51:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:03.575 09:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:03.575 09:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:25:03.575 09:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:03.575 09:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:03.575 09:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:03.575 09:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:03.575 09:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:03.575 09:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:03.575 09:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:03.575 09:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:03.575 09:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:03.575 09:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:03.575 09:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:03.575 "name": "Existed_Raid", 00:25:03.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.575 "strip_size_kb": 64, 00:25:03.575 "state": "configuring", 00:25:03.575 "raid_level": "concat", 00:25:03.575 "superblock": false, 00:25:03.575 "num_base_bdevs": 4, 00:25:03.575 "num_base_bdevs_discovered": 1, 00:25:03.575 "num_base_bdevs_operational": 4, 00:25:03.575 "base_bdevs_list": [ 00:25:03.575 { 00:25:03.575 "name": "BaseBdev1", 00:25:03.575 "uuid": "d000e28a-428f-11ef-a0af-c98d8ee52a94", 00:25:03.575 "is_configured": true, 00:25:03.575 "data_offset": 0, 00:25:03.575 "data_size": 65536 00:25:03.575 }, 00:25:03.575 { 00:25:03.575 "name": "BaseBdev2", 00:25:03.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.575 "is_configured": false, 00:25:03.575 "data_offset": 0, 00:25:03.575 "data_size": 0 00:25:03.575 }, 00:25:03.575 { 00:25:03.575 "name": "BaseBdev3", 00:25:03.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.575 "is_configured": false, 00:25:03.575 "data_offset": 0, 00:25:03.575 "data_size": 0 00:25:03.575 }, 00:25:03.575 { 00:25:03.575 "name": "BaseBdev4", 00:25:03.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.575 "is_configured": false, 00:25:03.575 "data_offset": 0, 00:25:03.575 "data_size": 0 00:25:03.575 } 00:25:03.575 ] 00:25:03.575 }' 00:25:03.575 09:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:03.575 09:51:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:04.143 09:51:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:04.143 [2024-07-15 09:51:32.137113] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:04.143 [2024-07-15 09:51:32.137147] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2348dee34500 name Existed_Raid, state configuring 00:25:04.144 09:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 
BaseBdev4' -n Existed_Raid 00:25:04.402 [2024-07-15 09:51:32.353137] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:04.402 [2024-07-15 09:51:32.354057] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:04.402 [2024-07-15 09:51:32.354105] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:04.402 [2024-07-15 09:51:32.354110] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:04.402 [2024-07-15 09:51:32.354117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:04.402 [2024-07-15 09:51:32.354120] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:04.402 [2024-07-15 09:51:32.354127] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:04.402 09:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:25:04.402 09:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:04.402 09:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:04.402 09:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:04.402 09:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:04.402 09:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:04.402 09:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:04.402 09:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:04.402 09:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:04.402 09:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:04.402 09:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:04.402 09:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:04.402 09:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:04.402 09:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:04.661 09:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:04.661 "name": "Existed_Raid", 00:25:04.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:04.661 "strip_size_kb": 64, 00:25:04.661 "state": "configuring", 00:25:04.661 "raid_level": "concat", 00:25:04.661 "superblock": false, 00:25:04.661 "num_base_bdevs": 4, 00:25:04.661 "num_base_bdevs_discovered": 1, 00:25:04.661 "num_base_bdevs_operational": 4, 00:25:04.661 "base_bdevs_list": [ 00:25:04.661 { 00:25:04.661 "name": "BaseBdev1", 00:25:04.661 "uuid": "d000e28a-428f-11ef-a0af-c98d8ee52a94", 00:25:04.661 "is_configured": true, 00:25:04.661 "data_offset": 0, 00:25:04.661 "data_size": 65536 00:25:04.661 }, 00:25:04.661 { 00:25:04.661 "name": "BaseBdev2", 00:25:04.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:04.661 "is_configured": false, 00:25:04.661 "data_offset": 0, 00:25:04.661 
"data_size": 0 00:25:04.661 }, 00:25:04.661 { 00:25:04.661 "name": "BaseBdev3", 00:25:04.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:04.661 "is_configured": false, 00:25:04.661 "data_offset": 0, 00:25:04.661 "data_size": 0 00:25:04.661 }, 00:25:04.661 { 00:25:04.661 "name": "BaseBdev4", 00:25:04.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:04.661 "is_configured": false, 00:25:04.661 "data_offset": 0, 00:25:04.661 "data_size": 0 00:25:04.661 } 00:25:04.661 ] 00:25:04.661 }' 00:25:04.661 09:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:04.661 09:51:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:04.919 09:51:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:05.178 [2024-07-15 09:51:33.081338] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:05.178 BaseBdev2 00:25:05.178 09:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:25:05.178 09:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:25:05.178 09:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:05.178 09:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:05.178 09:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:05.178 09:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:05.178 09:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:05.435 09:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:05.692 [ 00:25:05.693 { 00:25:05.693 "name": "BaseBdev2", 00:25:05.693 "aliases": [ 00:25:05.693 "d149e419-428f-11ef-a0af-c98d8ee52a94" 00:25:05.693 ], 00:25:05.693 "product_name": "Malloc disk", 00:25:05.693 "block_size": 512, 00:25:05.693 "num_blocks": 65536, 00:25:05.693 "uuid": "d149e419-428f-11ef-a0af-c98d8ee52a94", 00:25:05.693 "assigned_rate_limits": { 00:25:05.693 "rw_ios_per_sec": 0, 00:25:05.693 "rw_mbytes_per_sec": 0, 00:25:05.693 "r_mbytes_per_sec": 0, 00:25:05.693 "w_mbytes_per_sec": 0 00:25:05.693 }, 00:25:05.693 "claimed": true, 00:25:05.693 "claim_type": "exclusive_write", 00:25:05.693 "zoned": false, 00:25:05.693 "supported_io_types": { 00:25:05.693 "read": true, 00:25:05.693 "write": true, 00:25:05.693 "unmap": true, 00:25:05.693 "flush": true, 00:25:05.693 "reset": true, 00:25:05.693 "nvme_admin": false, 00:25:05.693 "nvme_io": false, 00:25:05.693 "nvme_io_md": false, 00:25:05.693 "write_zeroes": true, 00:25:05.693 "zcopy": true, 00:25:05.693 "get_zone_info": false, 00:25:05.693 "zone_management": false, 00:25:05.693 "zone_append": false, 00:25:05.693 "compare": false, 00:25:05.693 "compare_and_write": false, 00:25:05.693 "abort": true, 00:25:05.693 "seek_hole": false, 00:25:05.693 "seek_data": false, 00:25:05.693 "copy": true, 00:25:05.693 "nvme_iov_md": false 00:25:05.693 }, 00:25:05.693 "memory_domains": [ 00:25:05.693 { 00:25:05.693 "dma_device_id": "system", 00:25:05.693 "dma_device_type": 
1 00:25:05.693 }, 00:25:05.693 { 00:25:05.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:05.693 "dma_device_type": 2 00:25:05.693 } 00:25:05.693 ], 00:25:05.693 "driver_specific": {} 00:25:05.693 } 00:25:05.693 ] 00:25:05.693 09:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:05.693 09:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:05.693 09:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:05.693 09:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:05.693 09:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:05.693 09:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:05.693 09:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:05.693 09:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:05.693 09:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:05.693 09:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:05.693 09:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:05.693 09:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:05.693 09:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:05.693 09:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:05.693 09:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:05.693 09:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:05.693 "name": "Existed_Raid", 00:25:05.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.693 "strip_size_kb": 64, 00:25:05.693 "state": "configuring", 00:25:05.693 "raid_level": "concat", 00:25:05.693 "superblock": false, 00:25:05.693 "num_base_bdevs": 4, 00:25:05.693 "num_base_bdevs_discovered": 2, 00:25:05.693 "num_base_bdevs_operational": 4, 00:25:05.693 "base_bdevs_list": [ 00:25:05.693 { 00:25:05.693 "name": "BaseBdev1", 00:25:05.693 "uuid": "d000e28a-428f-11ef-a0af-c98d8ee52a94", 00:25:05.693 "is_configured": true, 00:25:05.693 "data_offset": 0, 00:25:05.693 "data_size": 65536 00:25:05.693 }, 00:25:05.693 { 00:25:05.693 "name": "BaseBdev2", 00:25:05.693 "uuid": "d149e419-428f-11ef-a0af-c98d8ee52a94", 00:25:05.693 "is_configured": true, 00:25:05.693 "data_offset": 0, 00:25:05.693 "data_size": 65536 00:25:05.693 }, 00:25:05.693 { 00:25:05.693 "name": "BaseBdev3", 00:25:05.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.693 "is_configured": false, 00:25:05.693 "data_offset": 0, 00:25:05.693 "data_size": 0 00:25:05.693 }, 00:25:05.693 { 00:25:05.693 "name": "BaseBdev4", 00:25:05.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.693 "is_configured": false, 00:25:05.693 "data_offset": 0, 00:25:05.693 "data_size": 0 00:25:05.693 } 00:25:05.693 ] 00:25:05.693 }' 00:25:05.693 09:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:05.693 
09:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:06.259 09:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:06.259 [2024-07-15 09:51:34.245330] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:06.259 BaseBdev3 00:25:06.259 09:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:25:06.259 09:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:25:06.259 09:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:06.259 09:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:06.259 09:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:06.259 09:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:06.259 09:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:06.517 09:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:06.824 [ 00:25:06.824 { 00:25:06.824 "name": "BaseBdev3", 00:25:06.824 "aliases": [ 00:25:06.824 "d1fb8315-428f-11ef-a0af-c98d8ee52a94" 00:25:06.824 ], 00:25:06.824 "product_name": "Malloc disk", 00:25:06.824 "block_size": 512, 00:25:06.824 "num_blocks": 65536, 00:25:06.824 "uuid": "d1fb8315-428f-11ef-a0af-c98d8ee52a94", 00:25:06.824 "assigned_rate_limits": { 00:25:06.824 "rw_ios_per_sec": 0, 00:25:06.824 "rw_mbytes_per_sec": 0, 00:25:06.824 "r_mbytes_per_sec": 0, 00:25:06.824 "w_mbytes_per_sec": 0 00:25:06.824 }, 00:25:06.824 "claimed": true, 00:25:06.824 "claim_type": "exclusive_write", 00:25:06.824 "zoned": false, 00:25:06.824 "supported_io_types": { 00:25:06.824 "read": true, 00:25:06.824 "write": true, 00:25:06.824 "unmap": true, 00:25:06.824 "flush": true, 00:25:06.824 "reset": true, 00:25:06.824 "nvme_admin": false, 00:25:06.824 "nvme_io": false, 00:25:06.824 "nvme_io_md": false, 00:25:06.824 "write_zeroes": true, 00:25:06.824 "zcopy": true, 00:25:06.824 "get_zone_info": false, 00:25:06.824 "zone_management": false, 00:25:06.824 "zone_append": false, 00:25:06.824 "compare": false, 00:25:06.824 "compare_and_write": false, 00:25:06.824 "abort": true, 00:25:06.824 "seek_hole": false, 00:25:06.824 "seek_data": false, 00:25:06.824 "copy": true, 00:25:06.824 "nvme_iov_md": false 00:25:06.824 }, 00:25:06.824 "memory_domains": [ 00:25:06.824 { 00:25:06.824 "dma_device_id": "system", 00:25:06.824 "dma_device_type": 1 00:25:06.824 }, 00:25:06.824 { 00:25:06.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:06.824 "dma_device_type": 2 00:25:06.824 } 00:25:06.824 ], 00:25:06.824 "driver_specific": {} 00:25:06.824 } 00:25:06.824 ] 00:25:06.824 09:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:06.824 09:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:06.824 09:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:06.824 09:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:06.824 09:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:06.824 09:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:06.824 09:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:06.824 09:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:06.824 09:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:06.824 09:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:06.824 09:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:06.824 09:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:06.824 09:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:06.824 09:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:06.824 09:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:06.824 09:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:06.824 "name": "Existed_Raid", 00:25:06.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.824 "strip_size_kb": 64, 00:25:06.824 "state": "configuring", 00:25:06.824 "raid_level": "concat", 00:25:06.824 "superblock": false, 00:25:06.824 "num_base_bdevs": 4, 00:25:06.824 "num_base_bdevs_discovered": 3, 00:25:06.824 "num_base_bdevs_operational": 4, 00:25:06.824 "base_bdevs_list": [ 00:25:06.824 { 00:25:06.824 "name": "BaseBdev1", 00:25:06.824 "uuid": "d000e28a-428f-11ef-a0af-c98d8ee52a94", 00:25:06.824 "is_configured": true, 00:25:06.824 "data_offset": 0, 00:25:06.824 "data_size": 65536 00:25:06.824 }, 00:25:06.824 { 00:25:06.824 "name": "BaseBdev2", 00:25:06.824 "uuid": "d149e419-428f-11ef-a0af-c98d8ee52a94", 00:25:06.824 "is_configured": true, 00:25:06.824 "data_offset": 0, 00:25:06.824 "data_size": 65536 00:25:06.824 }, 00:25:06.824 { 00:25:06.824 "name": "BaseBdev3", 00:25:06.824 "uuid": "d1fb8315-428f-11ef-a0af-c98d8ee52a94", 00:25:06.824 "is_configured": true, 00:25:06.824 "data_offset": 0, 00:25:06.824 "data_size": 65536 00:25:06.824 }, 00:25:06.824 { 00:25:06.824 "name": "BaseBdev4", 00:25:06.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.824 "is_configured": false, 00:25:06.824 "data_offset": 0, 00:25:06.824 "data_size": 0 00:25:06.824 } 00:25:06.824 ] 00:25:06.824 }' 00:25:06.824 09:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:06.824 09:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:07.084 09:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:07.343 [2024-07-15 09:51:35.337392] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:07.343 [2024-07-15 09:51:35.337417] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2348dee34a00 00:25:07.343 [2024-07-15 09:51:35.337421] 
bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:25:07.343 [2024-07-15 09:51:35.337449] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2348dee97e20 00:25:07.343 [2024-07-15 09:51:35.337549] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2348dee34a00 00:25:07.343 [2024-07-15 09:51:35.337553] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2348dee34a00 00:25:07.343 [2024-07-15 09:51:35.337584] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:07.343 BaseBdev4 00:25:07.343 09:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:25:07.343 09:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:25:07.343 09:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:07.343 09:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:07.343 09:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:07.343 09:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:07.343 09:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:07.603 09:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:07.863 [ 00:25:07.863 { 00:25:07.863 "name": "BaseBdev4", 00:25:07.863 "aliases": [ 00:25:07.863 "d2a225c5-428f-11ef-a0af-c98d8ee52a94" 00:25:07.863 ], 00:25:07.863 "product_name": "Malloc disk", 00:25:07.863 "block_size": 512, 00:25:07.863 "num_blocks": 65536, 00:25:07.863 "uuid": "d2a225c5-428f-11ef-a0af-c98d8ee52a94", 00:25:07.863 "assigned_rate_limits": { 00:25:07.863 "rw_ios_per_sec": 0, 00:25:07.863 "rw_mbytes_per_sec": 0, 00:25:07.863 "r_mbytes_per_sec": 0, 00:25:07.863 "w_mbytes_per_sec": 0 00:25:07.863 }, 00:25:07.863 "claimed": true, 00:25:07.863 "claim_type": "exclusive_write", 00:25:07.863 "zoned": false, 00:25:07.863 "supported_io_types": { 00:25:07.863 "read": true, 00:25:07.863 "write": true, 00:25:07.863 "unmap": true, 00:25:07.863 "flush": true, 00:25:07.863 "reset": true, 00:25:07.863 "nvme_admin": false, 00:25:07.863 "nvme_io": false, 00:25:07.863 "nvme_io_md": false, 00:25:07.863 "write_zeroes": true, 00:25:07.863 "zcopy": true, 00:25:07.863 "get_zone_info": false, 00:25:07.863 "zone_management": false, 00:25:07.863 "zone_append": false, 00:25:07.863 "compare": false, 00:25:07.863 "compare_and_write": false, 00:25:07.863 "abort": true, 00:25:07.863 "seek_hole": false, 00:25:07.863 "seek_data": false, 00:25:07.863 "copy": true, 00:25:07.863 "nvme_iov_md": false 00:25:07.863 }, 00:25:07.863 "memory_domains": [ 00:25:07.863 { 00:25:07.863 "dma_device_id": "system", 00:25:07.863 "dma_device_type": 1 00:25:07.863 }, 00:25:07.863 { 00:25:07.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:07.863 "dma_device_type": 2 00:25:07.863 } 00:25:07.863 ], 00:25:07.863 "driver_specific": {} 00:25:07.863 } 00:25:07.863 ] 00:25:07.863 09:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:07.863 09:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 
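Each member is added with bdev_malloc_create and then waited on through the waitforbdev helper, which per the trace amounts to bdev_wait_for_examine followed by bdev_get_bdevs with a millisecond timeout. A sketch of adding the final member, after which the concat array flips from configuring to online:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# 32 MiB malloc disk with 512-byte blocks -> 65536 blocks, matching the
# num_blocks in the bdev dumps above (4 x 65536 = 262144 for the raid).
$rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev4

# Let examine callbacks settle, then wait up to 2000 ms for the bdev to appear.
$rpc -s $sock bdev_wait_for_examine
$rpc -s $sock bdev_get_bdevs -b BaseBdev4 -t 2000 > /dev/null

# With all four members present the array should report "online".
$rpc -s $sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid") | .state'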
00:25:07.863 09:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:07.863 09:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:25:07.863 09:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:07.863 09:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:07.863 09:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:07.863 09:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:07.863 09:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:07.863 09:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:07.863 09:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:07.863 09:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:07.863 09:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:07.863 09:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:07.863 09:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:07.863 09:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:07.863 "name": "Existed_Raid", 00:25:07.863 "uuid": "d2a22a87-428f-11ef-a0af-c98d8ee52a94", 00:25:07.864 "strip_size_kb": 64, 00:25:07.864 "state": "online", 00:25:07.864 "raid_level": "concat", 00:25:07.864 "superblock": false, 00:25:07.864 "num_base_bdevs": 4, 00:25:07.864 "num_base_bdevs_discovered": 4, 00:25:07.864 "num_base_bdevs_operational": 4, 00:25:07.864 "base_bdevs_list": [ 00:25:07.864 { 00:25:07.864 "name": "BaseBdev1", 00:25:07.864 "uuid": "d000e28a-428f-11ef-a0af-c98d8ee52a94", 00:25:07.864 "is_configured": true, 00:25:07.864 "data_offset": 0, 00:25:07.864 "data_size": 65536 00:25:07.864 }, 00:25:07.864 { 00:25:07.864 "name": "BaseBdev2", 00:25:07.864 "uuid": "d149e419-428f-11ef-a0af-c98d8ee52a94", 00:25:07.864 "is_configured": true, 00:25:07.864 "data_offset": 0, 00:25:07.864 "data_size": 65536 00:25:07.864 }, 00:25:07.864 { 00:25:07.864 "name": "BaseBdev3", 00:25:07.864 "uuid": "d1fb8315-428f-11ef-a0af-c98d8ee52a94", 00:25:07.864 "is_configured": true, 00:25:07.864 "data_offset": 0, 00:25:07.864 "data_size": 65536 00:25:07.864 }, 00:25:07.864 { 00:25:07.864 "name": "BaseBdev4", 00:25:07.864 "uuid": "d2a225c5-428f-11ef-a0af-c98d8ee52a94", 00:25:07.864 "is_configured": true, 00:25:07.864 "data_offset": 0, 00:25:07.864 "data_size": 65536 00:25:07.864 } 00:25:07.864 ] 00:25:07.864 }' 00:25:07.864 09:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:07.864 09:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:08.434 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:25:08.434 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:08.434 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local 
raid_bdev_info 00:25:08.434 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:08.434 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:08.434 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:25:08.434 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:08.434 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:08.434 [2024-07-15 09:51:36.413400] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:08.434 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:08.434 "name": "Existed_Raid", 00:25:08.434 "aliases": [ 00:25:08.434 "d2a22a87-428f-11ef-a0af-c98d8ee52a94" 00:25:08.434 ], 00:25:08.434 "product_name": "Raid Volume", 00:25:08.434 "block_size": 512, 00:25:08.434 "num_blocks": 262144, 00:25:08.434 "uuid": "d2a22a87-428f-11ef-a0af-c98d8ee52a94", 00:25:08.434 "assigned_rate_limits": { 00:25:08.434 "rw_ios_per_sec": 0, 00:25:08.434 "rw_mbytes_per_sec": 0, 00:25:08.434 "r_mbytes_per_sec": 0, 00:25:08.434 "w_mbytes_per_sec": 0 00:25:08.434 }, 00:25:08.434 "claimed": false, 00:25:08.434 "zoned": false, 00:25:08.434 "supported_io_types": { 00:25:08.434 "read": true, 00:25:08.434 "write": true, 00:25:08.434 "unmap": true, 00:25:08.434 "flush": true, 00:25:08.434 "reset": true, 00:25:08.434 "nvme_admin": false, 00:25:08.434 "nvme_io": false, 00:25:08.434 "nvme_io_md": false, 00:25:08.434 "write_zeroes": true, 00:25:08.434 "zcopy": false, 00:25:08.434 "get_zone_info": false, 00:25:08.434 "zone_management": false, 00:25:08.434 "zone_append": false, 00:25:08.434 "compare": false, 00:25:08.434 "compare_and_write": false, 00:25:08.434 "abort": false, 00:25:08.434 "seek_hole": false, 00:25:08.434 "seek_data": false, 00:25:08.434 "copy": false, 00:25:08.434 "nvme_iov_md": false 00:25:08.434 }, 00:25:08.434 "memory_domains": [ 00:25:08.434 { 00:25:08.434 "dma_device_id": "system", 00:25:08.434 "dma_device_type": 1 00:25:08.434 }, 00:25:08.434 { 00:25:08.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:08.434 "dma_device_type": 2 00:25:08.434 }, 00:25:08.434 { 00:25:08.434 "dma_device_id": "system", 00:25:08.434 "dma_device_type": 1 00:25:08.434 }, 00:25:08.434 { 00:25:08.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:08.434 "dma_device_type": 2 00:25:08.434 }, 00:25:08.434 { 00:25:08.434 "dma_device_id": "system", 00:25:08.434 "dma_device_type": 1 00:25:08.434 }, 00:25:08.434 { 00:25:08.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:08.434 "dma_device_type": 2 00:25:08.434 }, 00:25:08.434 { 00:25:08.434 "dma_device_id": "system", 00:25:08.434 "dma_device_type": 1 00:25:08.434 }, 00:25:08.434 { 00:25:08.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:08.434 "dma_device_type": 2 00:25:08.434 } 00:25:08.434 ], 00:25:08.434 "driver_specific": { 00:25:08.434 "raid": { 00:25:08.434 "uuid": "d2a22a87-428f-11ef-a0af-c98d8ee52a94", 00:25:08.434 "strip_size_kb": 64, 00:25:08.434 "state": "online", 00:25:08.434 "raid_level": "concat", 00:25:08.434 "superblock": false, 00:25:08.434 "num_base_bdevs": 4, 00:25:08.434 "num_base_bdevs_discovered": 4, 00:25:08.434 "num_base_bdevs_operational": 4, 00:25:08.434 "base_bdevs_list": [ 00:25:08.434 { 00:25:08.434 "name": "BaseBdev1", 00:25:08.434 "uuid": 
"d000e28a-428f-11ef-a0af-c98d8ee52a94", 00:25:08.434 "is_configured": true, 00:25:08.434 "data_offset": 0, 00:25:08.434 "data_size": 65536 00:25:08.434 }, 00:25:08.434 { 00:25:08.434 "name": "BaseBdev2", 00:25:08.434 "uuid": "d149e419-428f-11ef-a0af-c98d8ee52a94", 00:25:08.434 "is_configured": true, 00:25:08.434 "data_offset": 0, 00:25:08.434 "data_size": 65536 00:25:08.434 }, 00:25:08.434 { 00:25:08.434 "name": "BaseBdev3", 00:25:08.434 "uuid": "d1fb8315-428f-11ef-a0af-c98d8ee52a94", 00:25:08.434 "is_configured": true, 00:25:08.434 "data_offset": 0, 00:25:08.434 "data_size": 65536 00:25:08.434 }, 00:25:08.434 { 00:25:08.434 "name": "BaseBdev4", 00:25:08.434 "uuid": "d2a225c5-428f-11ef-a0af-c98d8ee52a94", 00:25:08.434 "is_configured": true, 00:25:08.434 "data_offset": 0, 00:25:08.434 "data_size": 65536 00:25:08.434 } 00:25:08.434 ] 00:25:08.434 } 00:25:08.434 } 00:25:08.434 }' 00:25:08.434 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:08.434 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:25:08.434 BaseBdev2 00:25:08.434 BaseBdev3 00:25:08.434 BaseBdev4' 00:25:08.434 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:08.434 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:08.434 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:25:08.695 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:08.695 "name": "BaseBdev1", 00:25:08.695 "aliases": [ 00:25:08.695 "d000e28a-428f-11ef-a0af-c98d8ee52a94" 00:25:08.695 ], 00:25:08.695 "product_name": "Malloc disk", 00:25:08.695 "block_size": 512, 00:25:08.695 "num_blocks": 65536, 00:25:08.695 "uuid": "d000e28a-428f-11ef-a0af-c98d8ee52a94", 00:25:08.695 "assigned_rate_limits": { 00:25:08.695 "rw_ios_per_sec": 0, 00:25:08.695 "rw_mbytes_per_sec": 0, 00:25:08.695 "r_mbytes_per_sec": 0, 00:25:08.695 "w_mbytes_per_sec": 0 00:25:08.695 }, 00:25:08.695 "claimed": true, 00:25:08.695 "claim_type": "exclusive_write", 00:25:08.695 "zoned": false, 00:25:08.695 "supported_io_types": { 00:25:08.695 "read": true, 00:25:08.695 "write": true, 00:25:08.695 "unmap": true, 00:25:08.695 "flush": true, 00:25:08.695 "reset": true, 00:25:08.695 "nvme_admin": false, 00:25:08.695 "nvme_io": false, 00:25:08.695 "nvme_io_md": false, 00:25:08.695 "write_zeroes": true, 00:25:08.695 "zcopy": true, 00:25:08.695 "get_zone_info": false, 00:25:08.695 "zone_management": false, 00:25:08.695 "zone_append": false, 00:25:08.695 "compare": false, 00:25:08.695 "compare_and_write": false, 00:25:08.695 "abort": true, 00:25:08.695 "seek_hole": false, 00:25:08.695 "seek_data": false, 00:25:08.695 "copy": true, 00:25:08.695 "nvme_iov_md": false 00:25:08.695 }, 00:25:08.695 "memory_domains": [ 00:25:08.695 { 00:25:08.695 "dma_device_id": "system", 00:25:08.695 "dma_device_type": 1 00:25:08.695 }, 00:25:08.695 { 00:25:08.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:08.695 "dma_device_type": 2 00:25:08.695 } 00:25:08.695 ], 00:25:08.695 "driver_specific": {} 00:25:08.695 }' 00:25:08.695 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:08.695 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- 
# jq .block_size 00:25:08.695 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:08.695 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:08.695 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:08.695 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:08.695 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:08.695 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:08.695 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:08.695 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:08.695 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:08.695 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:08.695 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:08.695 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:08.695 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:08.955 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:08.955 "name": "BaseBdev2", 00:25:08.955 "aliases": [ 00:25:08.955 "d149e419-428f-11ef-a0af-c98d8ee52a94" 00:25:08.955 ], 00:25:08.955 "product_name": "Malloc disk", 00:25:08.955 "block_size": 512, 00:25:08.955 "num_blocks": 65536, 00:25:08.955 "uuid": "d149e419-428f-11ef-a0af-c98d8ee52a94", 00:25:08.955 "assigned_rate_limits": { 00:25:08.955 "rw_ios_per_sec": 0, 00:25:08.955 "rw_mbytes_per_sec": 0, 00:25:08.955 "r_mbytes_per_sec": 0, 00:25:08.955 "w_mbytes_per_sec": 0 00:25:08.955 }, 00:25:08.955 "claimed": true, 00:25:08.955 "claim_type": "exclusive_write", 00:25:08.955 "zoned": false, 00:25:08.955 "supported_io_types": { 00:25:08.955 "read": true, 00:25:08.955 "write": true, 00:25:08.955 "unmap": true, 00:25:08.955 "flush": true, 00:25:08.955 "reset": true, 00:25:08.955 "nvme_admin": false, 00:25:08.955 "nvme_io": false, 00:25:08.955 "nvme_io_md": false, 00:25:08.955 "write_zeroes": true, 00:25:08.955 "zcopy": true, 00:25:08.955 "get_zone_info": false, 00:25:08.955 "zone_management": false, 00:25:08.955 "zone_append": false, 00:25:08.955 "compare": false, 00:25:08.955 "compare_and_write": false, 00:25:08.955 "abort": true, 00:25:08.955 "seek_hole": false, 00:25:08.955 "seek_data": false, 00:25:08.955 "copy": true, 00:25:08.955 "nvme_iov_md": false 00:25:08.955 }, 00:25:08.955 "memory_domains": [ 00:25:08.955 { 00:25:08.955 "dma_device_id": "system", 00:25:08.955 "dma_device_type": 1 00:25:08.955 }, 00:25:08.955 { 00:25:08.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:08.955 "dma_device_type": 2 00:25:08.955 } 00:25:08.955 ], 00:25:08.955 "driver_specific": {} 00:25:08.955 }' 00:25:08.955 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:08.955 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:08.955 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:08.955 09:51:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:08.955 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:08.955 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:08.955 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:08.955 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:08.955 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:08.955 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:08.955 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:08.955 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:08.955 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:08.955 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:08.955 09:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:09.213 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:09.213 "name": "BaseBdev3", 00:25:09.213 "aliases": [ 00:25:09.213 "d1fb8315-428f-11ef-a0af-c98d8ee52a94" 00:25:09.213 ], 00:25:09.213 "product_name": "Malloc disk", 00:25:09.213 "block_size": 512, 00:25:09.213 "num_blocks": 65536, 00:25:09.213 "uuid": "d1fb8315-428f-11ef-a0af-c98d8ee52a94", 00:25:09.213 "assigned_rate_limits": { 00:25:09.213 "rw_ios_per_sec": 0, 00:25:09.213 "rw_mbytes_per_sec": 0, 00:25:09.213 "r_mbytes_per_sec": 0, 00:25:09.213 "w_mbytes_per_sec": 0 00:25:09.213 }, 00:25:09.213 "claimed": true, 00:25:09.213 "claim_type": "exclusive_write", 00:25:09.213 "zoned": false, 00:25:09.213 "supported_io_types": { 00:25:09.213 "read": true, 00:25:09.213 "write": true, 00:25:09.213 "unmap": true, 00:25:09.213 "flush": true, 00:25:09.213 "reset": true, 00:25:09.213 "nvme_admin": false, 00:25:09.213 "nvme_io": false, 00:25:09.213 "nvme_io_md": false, 00:25:09.213 "write_zeroes": true, 00:25:09.213 "zcopy": true, 00:25:09.213 "get_zone_info": false, 00:25:09.213 "zone_management": false, 00:25:09.214 "zone_append": false, 00:25:09.214 "compare": false, 00:25:09.214 "compare_and_write": false, 00:25:09.214 "abort": true, 00:25:09.214 "seek_hole": false, 00:25:09.214 "seek_data": false, 00:25:09.214 "copy": true, 00:25:09.214 "nvme_iov_md": false 00:25:09.214 }, 00:25:09.214 "memory_domains": [ 00:25:09.214 { 00:25:09.214 "dma_device_id": "system", 00:25:09.214 "dma_device_type": 1 00:25:09.214 }, 00:25:09.214 { 00:25:09.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:09.214 "dma_device_type": 2 00:25:09.214 } 00:25:09.214 ], 00:25:09.214 "driver_specific": {} 00:25:09.214 }' 00:25:09.214 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:09.214 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:09.214 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:09.214 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:09.214 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:09.214 09:51:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:09.214 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:09.214 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:09.214 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:09.214 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:09.214 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:09.214 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:09.214 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:09.214 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:25:09.214 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:09.474 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:09.474 "name": "BaseBdev4", 00:25:09.474 "aliases": [ 00:25:09.474 "d2a225c5-428f-11ef-a0af-c98d8ee52a94" 00:25:09.474 ], 00:25:09.474 "product_name": "Malloc disk", 00:25:09.474 "block_size": 512, 00:25:09.474 "num_blocks": 65536, 00:25:09.474 "uuid": "d2a225c5-428f-11ef-a0af-c98d8ee52a94", 00:25:09.474 "assigned_rate_limits": { 00:25:09.474 "rw_ios_per_sec": 0, 00:25:09.474 "rw_mbytes_per_sec": 0, 00:25:09.474 "r_mbytes_per_sec": 0, 00:25:09.474 "w_mbytes_per_sec": 0 00:25:09.474 }, 00:25:09.474 "claimed": true, 00:25:09.474 "claim_type": "exclusive_write", 00:25:09.474 "zoned": false, 00:25:09.474 "supported_io_types": { 00:25:09.474 "read": true, 00:25:09.474 "write": true, 00:25:09.474 "unmap": true, 00:25:09.474 "flush": true, 00:25:09.474 "reset": true, 00:25:09.474 "nvme_admin": false, 00:25:09.474 "nvme_io": false, 00:25:09.474 "nvme_io_md": false, 00:25:09.474 "write_zeroes": true, 00:25:09.474 "zcopy": true, 00:25:09.474 "get_zone_info": false, 00:25:09.474 "zone_management": false, 00:25:09.474 "zone_append": false, 00:25:09.474 "compare": false, 00:25:09.474 "compare_and_write": false, 00:25:09.474 "abort": true, 00:25:09.474 "seek_hole": false, 00:25:09.474 "seek_data": false, 00:25:09.474 "copy": true, 00:25:09.474 "nvme_iov_md": false 00:25:09.475 }, 00:25:09.475 "memory_domains": [ 00:25:09.475 { 00:25:09.475 "dma_device_id": "system", 00:25:09.475 "dma_device_type": 1 00:25:09.475 }, 00:25:09.475 { 00:25:09.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:09.475 "dma_device_type": 2 00:25:09.475 } 00:25:09.475 ], 00:25:09.475 "driver_specific": {} 00:25:09.475 }' 00:25:09.475 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:09.475 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:09.475 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:09.475 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:09.475 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:09.475 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:09.475 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:09.475 09:51:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:09.475 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:09.475 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:09.735 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:09.735 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:09.735 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:09.735 [2024-07-15 09:51:37.769458] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:09.735 [2024-07-15 09:51:37.769507] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:09.735 [2024-07-15 09:51:37.769518] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:09.735 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:25:09.735 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:25:09.735 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:09.735 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:25:09.735 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:25:09.735 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:25:09.735 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:09.735 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:25:09.735 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:09.735 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:09.735 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:09.735 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:09.735 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:09.735 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:09.735 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:09.735 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.735 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:10.009 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:10.009 "name": "Existed_Raid", 00:25:10.009 "uuid": "d2a22a87-428f-11ef-a0af-c98d8ee52a94", 00:25:10.009 "strip_size_kb": 64, 00:25:10.009 "state": "offline", 00:25:10.009 "raid_level": "concat", 00:25:10.009 "superblock": false, 00:25:10.009 "num_base_bdevs": 4, 00:25:10.009 "num_base_bdevs_discovered": 3, 00:25:10.009 "num_base_bdevs_operational": 3, 00:25:10.009 "base_bdevs_list": [ 00:25:10.009 { 00:25:10.009 
"name": null, 00:25:10.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:10.009 "is_configured": false, 00:25:10.009 "data_offset": 0, 00:25:10.009 "data_size": 65536 00:25:10.009 }, 00:25:10.009 { 00:25:10.009 "name": "BaseBdev2", 00:25:10.009 "uuid": "d149e419-428f-11ef-a0af-c98d8ee52a94", 00:25:10.009 "is_configured": true, 00:25:10.009 "data_offset": 0, 00:25:10.009 "data_size": 65536 00:25:10.009 }, 00:25:10.009 { 00:25:10.009 "name": "BaseBdev3", 00:25:10.009 "uuid": "d1fb8315-428f-11ef-a0af-c98d8ee52a94", 00:25:10.009 "is_configured": true, 00:25:10.009 "data_offset": 0, 00:25:10.009 "data_size": 65536 00:25:10.009 }, 00:25:10.009 { 00:25:10.009 "name": "BaseBdev4", 00:25:10.009 "uuid": "d2a225c5-428f-11ef-a0af-c98d8ee52a94", 00:25:10.009 "is_configured": true, 00:25:10.009 "data_offset": 0, 00:25:10.009 "data_size": 65536 00:25:10.009 } 00:25:10.009 ] 00:25:10.009 }' 00:25:10.009 09:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:10.009 09:51:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.269 09:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:25:10.269 09:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:10.269 09:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:10.269 09:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:10.527 09:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:10.527 09:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:10.527 09:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:10.785 [2024-07-15 09:51:38.674160] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:10.785 09:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:10.785 09:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:10.785 09:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:10.785 09:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:11.044 09:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:11.044 09:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:11.044 09:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:25:11.044 [2024-07-15 09:51:39.110888] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:11.044 09:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:11.044 09:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:11.044 09:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:25:11.044 09:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:11.302 09:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:11.302 09:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:11.302 09:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:25:11.561 [2024-07-15 09:51:39.515345] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:25:11.561 [2024-07-15 09:51:39.515367] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2348dee34a00 name Existed_Raid, state offline 00:25:11.561 09:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:11.561 09:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:11.561 09:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:25:11.561 09:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:11.820 09:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:25:11.820 09:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:25:11.820 09:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:25:11.820 09:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:25:11.820 09:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:11.820 09:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:11.820 BaseBdev2 00:25:12.078 09:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:25:12.078 09:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:25:12.078 09:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:12.078 09:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:12.078 09:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:12.078 09:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:12.078 09:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:12.078 09:51:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:12.336 [ 00:25:12.336 { 00:25:12.336 "name": "BaseBdev2", 00:25:12.336 "aliases": [ 00:25:12.336 "d55c22f3-428f-11ef-a0af-c98d8ee52a94" 00:25:12.336 ], 00:25:12.336 "product_name": "Malloc disk", 00:25:12.336 "block_size": 512, 00:25:12.336 "num_blocks": 65536, 00:25:12.336 "uuid": "d55c22f3-428f-11ef-a0af-c98d8ee52a94", 00:25:12.336 "assigned_rate_limits": { 00:25:12.336 "rw_ios_per_sec": 0, 00:25:12.336 "rw_mbytes_per_sec": 0, 00:25:12.336 
"r_mbytes_per_sec": 0, 00:25:12.336 "w_mbytes_per_sec": 0 00:25:12.336 }, 00:25:12.336 "claimed": false, 00:25:12.336 "zoned": false, 00:25:12.336 "supported_io_types": { 00:25:12.336 "read": true, 00:25:12.336 "write": true, 00:25:12.336 "unmap": true, 00:25:12.336 "flush": true, 00:25:12.336 "reset": true, 00:25:12.336 "nvme_admin": false, 00:25:12.336 "nvme_io": false, 00:25:12.336 "nvme_io_md": false, 00:25:12.336 "write_zeroes": true, 00:25:12.336 "zcopy": true, 00:25:12.336 "get_zone_info": false, 00:25:12.336 "zone_management": false, 00:25:12.336 "zone_append": false, 00:25:12.336 "compare": false, 00:25:12.336 "compare_and_write": false, 00:25:12.336 "abort": true, 00:25:12.336 "seek_hole": false, 00:25:12.336 "seek_data": false, 00:25:12.336 "copy": true, 00:25:12.336 "nvme_iov_md": false 00:25:12.336 }, 00:25:12.336 "memory_domains": [ 00:25:12.336 { 00:25:12.336 "dma_device_id": "system", 00:25:12.336 "dma_device_type": 1 00:25:12.336 }, 00:25:12.336 { 00:25:12.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:12.336 "dma_device_type": 2 00:25:12.336 } 00:25:12.336 ], 00:25:12.336 "driver_specific": {} 00:25:12.336 } 00:25:12.336 ] 00:25:12.336 09:51:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:12.336 09:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:12.336 09:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:12.336 09:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:12.594 BaseBdev3 00:25:12.594 09:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:25:12.594 09:51:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:25:12.594 09:51:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:12.594 09:51:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:12.594 09:51:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:12.594 09:51:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:12.594 09:51:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:12.890 09:51:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:12.890 [ 00:25:12.890 { 00:25:12.890 "name": "BaseBdev3", 00:25:12.890 "aliases": [ 00:25:12.890 "d5bac030-428f-11ef-a0af-c98d8ee52a94" 00:25:12.890 ], 00:25:12.890 "product_name": "Malloc disk", 00:25:12.890 "block_size": 512, 00:25:12.890 "num_blocks": 65536, 00:25:12.890 "uuid": "d5bac030-428f-11ef-a0af-c98d8ee52a94", 00:25:12.890 "assigned_rate_limits": { 00:25:12.890 "rw_ios_per_sec": 0, 00:25:12.890 "rw_mbytes_per_sec": 0, 00:25:12.890 "r_mbytes_per_sec": 0, 00:25:12.890 "w_mbytes_per_sec": 0 00:25:12.890 }, 00:25:12.890 "claimed": false, 00:25:12.890 "zoned": false, 00:25:12.890 "supported_io_types": { 00:25:12.890 "read": true, 00:25:12.890 "write": true, 00:25:12.890 "unmap": true, 00:25:12.890 "flush": true, 00:25:12.890 "reset": true, 00:25:12.890 "nvme_admin": false, 
00:25:12.890 "nvme_io": false, 00:25:12.890 "nvme_io_md": false, 00:25:12.890 "write_zeroes": true, 00:25:12.890 "zcopy": true, 00:25:12.890 "get_zone_info": false, 00:25:12.890 "zone_management": false, 00:25:12.890 "zone_append": false, 00:25:12.890 "compare": false, 00:25:12.890 "compare_and_write": false, 00:25:12.890 "abort": true, 00:25:12.890 "seek_hole": false, 00:25:12.890 "seek_data": false, 00:25:12.890 "copy": true, 00:25:12.890 "nvme_iov_md": false 00:25:12.890 }, 00:25:12.890 "memory_domains": [ 00:25:12.890 { 00:25:12.890 "dma_device_id": "system", 00:25:12.890 "dma_device_type": 1 00:25:12.890 }, 00:25:12.890 { 00:25:12.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:12.890 "dma_device_type": 2 00:25:12.890 } 00:25:12.890 ], 00:25:12.890 "driver_specific": {} 00:25:12.890 } 00:25:12.890 ] 00:25:12.890 09:51:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:12.890 09:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:12.890 09:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:12.890 09:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:13.148 BaseBdev4 00:25:13.148 09:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:25:13.148 09:51:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:25:13.148 09:51:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:13.148 09:51:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:13.148 09:51:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:13.148 09:51:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:13.148 09:51:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:13.407 09:51:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:13.407 [ 00:25:13.407 { 00:25:13.407 "name": "BaseBdev4", 00:25:13.407 "aliases": [ 00:25:13.407 "d61209a6-428f-11ef-a0af-c98d8ee52a94" 00:25:13.407 ], 00:25:13.407 "product_name": "Malloc disk", 00:25:13.407 "block_size": 512, 00:25:13.407 "num_blocks": 65536, 00:25:13.407 "uuid": "d61209a6-428f-11ef-a0af-c98d8ee52a94", 00:25:13.407 "assigned_rate_limits": { 00:25:13.407 "rw_ios_per_sec": 0, 00:25:13.407 "rw_mbytes_per_sec": 0, 00:25:13.407 "r_mbytes_per_sec": 0, 00:25:13.407 "w_mbytes_per_sec": 0 00:25:13.407 }, 00:25:13.407 "claimed": false, 00:25:13.407 "zoned": false, 00:25:13.407 "supported_io_types": { 00:25:13.407 "read": true, 00:25:13.407 "write": true, 00:25:13.407 "unmap": true, 00:25:13.407 "flush": true, 00:25:13.407 "reset": true, 00:25:13.407 "nvme_admin": false, 00:25:13.407 "nvme_io": false, 00:25:13.407 "nvme_io_md": false, 00:25:13.407 "write_zeroes": true, 00:25:13.407 "zcopy": true, 00:25:13.407 "get_zone_info": false, 00:25:13.407 "zone_management": false, 00:25:13.407 "zone_append": false, 00:25:13.407 "compare": false, 00:25:13.407 "compare_and_write": false, 00:25:13.407 "abort": true, 
00:25:13.407 "seek_hole": false, 00:25:13.407 "seek_data": false, 00:25:13.407 "copy": true, 00:25:13.407 "nvme_iov_md": false 00:25:13.407 }, 00:25:13.407 "memory_domains": [ 00:25:13.407 { 00:25:13.407 "dma_device_id": "system", 00:25:13.407 "dma_device_type": 1 00:25:13.407 }, 00:25:13.407 { 00:25:13.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:13.407 "dma_device_type": 2 00:25:13.407 } 00:25:13.407 ], 00:25:13.407 "driver_specific": {} 00:25:13.407 } 00:25:13.407 ] 00:25:13.407 09:51:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:13.407 09:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:13.407 09:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:13.407 09:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:13.665 [2024-07-15 09:51:41.691789] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:13.665 [2024-07-15 09:51:41.691851] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:13.665 [2024-07-15 09:51:41.691858] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:13.665 [2024-07-15 09:51:41.692454] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:13.665 [2024-07-15 09:51:41.692473] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:13.665 09:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:13.665 09:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:13.665 09:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:13.665 09:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:13.665 09:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:13.665 09:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:13.665 09:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:13.665 09:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:13.665 09:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:13.665 09:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:13.665 09:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:13.665 09:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:13.923 09:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:13.923 "name": "Existed_Raid", 00:25:13.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:13.923 "strip_size_kb": 64, 00:25:13.923 "state": "configuring", 00:25:13.923 "raid_level": "concat", 00:25:13.923 "superblock": false, 00:25:13.923 "num_base_bdevs": 4, 00:25:13.924 
"num_base_bdevs_discovered": 3, 00:25:13.924 "num_base_bdevs_operational": 4, 00:25:13.924 "base_bdevs_list": [ 00:25:13.924 { 00:25:13.924 "name": "BaseBdev1", 00:25:13.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:13.924 "is_configured": false, 00:25:13.924 "data_offset": 0, 00:25:13.924 "data_size": 0 00:25:13.924 }, 00:25:13.924 { 00:25:13.924 "name": "BaseBdev2", 00:25:13.924 "uuid": "d55c22f3-428f-11ef-a0af-c98d8ee52a94", 00:25:13.924 "is_configured": true, 00:25:13.924 "data_offset": 0, 00:25:13.924 "data_size": 65536 00:25:13.924 }, 00:25:13.924 { 00:25:13.924 "name": "BaseBdev3", 00:25:13.924 "uuid": "d5bac030-428f-11ef-a0af-c98d8ee52a94", 00:25:13.924 "is_configured": true, 00:25:13.924 "data_offset": 0, 00:25:13.924 "data_size": 65536 00:25:13.924 }, 00:25:13.924 { 00:25:13.924 "name": "BaseBdev4", 00:25:13.924 "uuid": "d61209a6-428f-11ef-a0af-c98d8ee52a94", 00:25:13.924 "is_configured": true, 00:25:13.924 "data_offset": 0, 00:25:13.924 "data_size": 65536 00:25:13.924 } 00:25:13.924 ] 00:25:13.924 }' 00:25:13.924 09:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:13.924 09:51:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:14.182 09:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:25:14.440 [2024-07-15 09:51:42.363884] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:14.440 09:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:14.440 09:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:14.440 09:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:14.440 09:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:14.440 09:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:14.440 09:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:14.440 09:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:14.440 09:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:14.440 09:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:14.440 09:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:14.440 09:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:14.440 09:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:14.698 09:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:14.698 "name": "Existed_Raid", 00:25:14.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:14.698 "strip_size_kb": 64, 00:25:14.698 "state": "configuring", 00:25:14.698 "raid_level": "concat", 00:25:14.698 "superblock": false, 00:25:14.698 "num_base_bdevs": 4, 00:25:14.698 "num_base_bdevs_discovered": 2, 00:25:14.698 "num_base_bdevs_operational": 4, 00:25:14.698 "base_bdevs_list": [ 00:25:14.698 { 00:25:14.698 
"name": "BaseBdev1", 00:25:14.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:14.699 "is_configured": false, 00:25:14.699 "data_offset": 0, 00:25:14.699 "data_size": 0 00:25:14.699 }, 00:25:14.699 { 00:25:14.699 "name": null, 00:25:14.699 "uuid": "d55c22f3-428f-11ef-a0af-c98d8ee52a94", 00:25:14.699 "is_configured": false, 00:25:14.699 "data_offset": 0, 00:25:14.699 "data_size": 65536 00:25:14.699 }, 00:25:14.699 { 00:25:14.699 "name": "BaseBdev3", 00:25:14.699 "uuid": "d5bac030-428f-11ef-a0af-c98d8ee52a94", 00:25:14.699 "is_configured": true, 00:25:14.699 "data_offset": 0, 00:25:14.699 "data_size": 65536 00:25:14.699 }, 00:25:14.699 { 00:25:14.699 "name": "BaseBdev4", 00:25:14.699 "uuid": "d61209a6-428f-11ef-a0af-c98d8ee52a94", 00:25:14.699 "is_configured": true, 00:25:14.699 "data_offset": 0, 00:25:14.699 "data_size": 65536 00:25:14.699 } 00:25:14.699 ] 00:25:14.699 }' 00:25:14.699 09:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:14.699 09:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:14.957 09:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:14.957 09:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:15.216 09:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:25:15.216 09:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:15.216 [2024-07-15 09:51:43.264061] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:15.216 BaseBdev1 00:25:15.216 09:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:25:15.216 09:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:25:15.216 09:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:15.216 09:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:15.216 09:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:15.216 09:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:15.216 09:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:15.475 09:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:15.734 [ 00:25:15.734 { 00:25:15.734 "name": "BaseBdev1", 00:25:15.734 "aliases": [ 00:25:15.734 "d75ba778-428f-11ef-a0af-c98d8ee52a94" 00:25:15.734 ], 00:25:15.734 "product_name": "Malloc disk", 00:25:15.734 "block_size": 512, 00:25:15.734 "num_blocks": 65536, 00:25:15.734 "uuid": "d75ba778-428f-11ef-a0af-c98d8ee52a94", 00:25:15.734 "assigned_rate_limits": { 00:25:15.734 "rw_ios_per_sec": 0, 00:25:15.734 "rw_mbytes_per_sec": 0, 00:25:15.734 "r_mbytes_per_sec": 0, 00:25:15.734 "w_mbytes_per_sec": 0 00:25:15.734 }, 00:25:15.734 "claimed": true, 00:25:15.734 "claim_type": "exclusive_write", 00:25:15.734 "zoned": false, 
00:25:15.734 "supported_io_types": { 00:25:15.734 "read": true, 00:25:15.734 "write": true, 00:25:15.734 "unmap": true, 00:25:15.734 "flush": true, 00:25:15.734 "reset": true, 00:25:15.734 "nvme_admin": false, 00:25:15.734 "nvme_io": false, 00:25:15.734 "nvme_io_md": false, 00:25:15.734 "write_zeroes": true, 00:25:15.734 "zcopy": true, 00:25:15.734 "get_zone_info": false, 00:25:15.734 "zone_management": false, 00:25:15.734 "zone_append": false, 00:25:15.734 "compare": false, 00:25:15.734 "compare_and_write": false, 00:25:15.734 "abort": true, 00:25:15.734 "seek_hole": false, 00:25:15.734 "seek_data": false, 00:25:15.734 "copy": true, 00:25:15.734 "nvme_iov_md": false 00:25:15.734 }, 00:25:15.734 "memory_domains": [ 00:25:15.734 { 00:25:15.734 "dma_device_id": "system", 00:25:15.734 "dma_device_type": 1 00:25:15.734 }, 00:25:15.734 { 00:25:15.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:15.734 "dma_device_type": 2 00:25:15.734 } 00:25:15.734 ], 00:25:15.734 "driver_specific": {} 00:25:15.734 } 00:25:15.734 ] 00:25:15.734 09:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:15.734 09:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:15.734 09:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:15.735 09:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:15.735 09:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:15.735 09:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:15.735 09:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:15.735 09:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:15.735 09:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:15.735 09:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:15.735 09:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:15.735 09:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.735 09:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:15.993 09:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:15.993 "name": "Existed_Raid", 00:25:15.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:15.993 "strip_size_kb": 64, 00:25:15.993 "state": "configuring", 00:25:15.993 "raid_level": "concat", 00:25:15.993 "superblock": false, 00:25:15.993 "num_base_bdevs": 4, 00:25:15.993 "num_base_bdevs_discovered": 3, 00:25:15.993 "num_base_bdevs_operational": 4, 00:25:15.993 "base_bdevs_list": [ 00:25:15.993 { 00:25:15.993 "name": "BaseBdev1", 00:25:15.994 "uuid": "d75ba778-428f-11ef-a0af-c98d8ee52a94", 00:25:15.994 "is_configured": true, 00:25:15.994 "data_offset": 0, 00:25:15.994 "data_size": 65536 00:25:15.994 }, 00:25:15.994 { 00:25:15.994 "name": null, 00:25:15.994 "uuid": "d55c22f3-428f-11ef-a0af-c98d8ee52a94", 00:25:15.994 "is_configured": false, 00:25:15.994 "data_offset": 0, 00:25:15.994 "data_size": 65536 00:25:15.994 
}, 00:25:15.994 { 00:25:15.994 "name": "BaseBdev3", 00:25:15.994 "uuid": "d5bac030-428f-11ef-a0af-c98d8ee52a94", 00:25:15.994 "is_configured": true, 00:25:15.994 "data_offset": 0, 00:25:15.994 "data_size": 65536 00:25:15.994 }, 00:25:15.994 { 00:25:15.994 "name": "BaseBdev4", 00:25:15.994 "uuid": "d61209a6-428f-11ef-a0af-c98d8ee52a94", 00:25:15.994 "is_configured": true, 00:25:15.994 "data_offset": 0, 00:25:15.994 "data_size": 65536 00:25:15.994 } 00:25:15.994 ] 00:25:15.994 }' 00:25:15.994 09:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:15.994 09:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:16.253 09:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:16.253 09:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:16.253 09:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:25:16.253 09:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:25:16.512 [2024-07-15 09:51:44.520006] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:16.512 09:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:16.512 09:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:16.512 09:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:16.512 09:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:16.512 09:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:16.512 09:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:16.512 09:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:16.512 09:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:16.512 09:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:16.512 09:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:16.512 09:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:16.512 09:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:16.770 09:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:16.770 "name": "Existed_Raid", 00:25:16.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:16.770 "strip_size_kb": 64, 00:25:16.770 "state": "configuring", 00:25:16.770 "raid_level": "concat", 00:25:16.770 "superblock": false, 00:25:16.770 "num_base_bdevs": 4, 00:25:16.770 "num_base_bdevs_discovered": 2, 00:25:16.770 "num_base_bdevs_operational": 4, 00:25:16.770 "base_bdevs_list": [ 00:25:16.770 { 00:25:16.770 "name": "BaseBdev1", 00:25:16.770 "uuid": "d75ba778-428f-11ef-a0af-c98d8ee52a94", 00:25:16.770 "is_configured": true, 00:25:16.770 
"data_offset": 0, 00:25:16.770 "data_size": 65536 00:25:16.770 }, 00:25:16.770 { 00:25:16.770 "name": null, 00:25:16.770 "uuid": "d55c22f3-428f-11ef-a0af-c98d8ee52a94", 00:25:16.770 "is_configured": false, 00:25:16.770 "data_offset": 0, 00:25:16.770 "data_size": 65536 00:25:16.770 }, 00:25:16.770 { 00:25:16.770 "name": null, 00:25:16.770 "uuid": "d5bac030-428f-11ef-a0af-c98d8ee52a94", 00:25:16.770 "is_configured": false, 00:25:16.770 "data_offset": 0, 00:25:16.770 "data_size": 65536 00:25:16.770 }, 00:25:16.770 { 00:25:16.770 "name": "BaseBdev4", 00:25:16.770 "uuid": "d61209a6-428f-11ef-a0af-c98d8ee52a94", 00:25:16.770 "is_configured": true, 00:25:16.770 "data_offset": 0, 00:25:16.770 "data_size": 65536 00:25:16.770 } 00:25:16.770 ] 00:25:16.770 }' 00:25:16.770 09:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:16.770 09:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:17.029 09:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:17.029 09:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:17.287 09:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:25:17.287 09:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:25:17.546 [2024-07-15 09:51:45.432060] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:17.546 09:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:17.546 09:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:17.546 09:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:17.546 09:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:17.546 09:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:17.546 09:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:17.546 09:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:17.546 09:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:17.546 09:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:17.546 09:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:17.546 09:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:17.546 09:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:17.804 09:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:17.804 "name": "Existed_Raid", 00:25:17.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:17.804 "strip_size_kb": 64, 00:25:17.804 "state": "configuring", 00:25:17.804 "raid_level": "concat", 00:25:17.804 "superblock": false, 00:25:17.804 
"num_base_bdevs": 4, 00:25:17.804 "num_base_bdevs_discovered": 3, 00:25:17.804 "num_base_bdevs_operational": 4, 00:25:17.804 "base_bdevs_list": [ 00:25:17.804 { 00:25:17.804 "name": "BaseBdev1", 00:25:17.804 "uuid": "d75ba778-428f-11ef-a0af-c98d8ee52a94", 00:25:17.804 "is_configured": true, 00:25:17.804 "data_offset": 0, 00:25:17.804 "data_size": 65536 00:25:17.804 }, 00:25:17.804 { 00:25:17.804 "name": null, 00:25:17.804 "uuid": "d55c22f3-428f-11ef-a0af-c98d8ee52a94", 00:25:17.804 "is_configured": false, 00:25:17.804 "data_offset": 0, 00:25:17.804 "data_size": 65536 00:25:17.804 }, 00:25:17.804 { 00:25:17.804 "name": "BaseBdev3", 00:25:17.804 "uuid": "d5bac030-428f-11ef-a0af-c98d8ee52a94", 00:25:17.804 "is_configured": true, 00:25:17.804 "data_offset": 0, 00:25:17.804 "data_size": 65536 00:25:17.804 }, 00:25:17.804 { 00:25:17.804 "name": "BaseBdev4", 00:25:17.804 "uuid": "d61209a6-428f-11ef-a0af-c98d8ee52a94", 00:25:17.804 "is_configured": true, 00:25:17.804 "data_offset": 0, 00:25:17.804 "data_size": 65536 00:25:17.804 } 00:25:17.804 ] 00:25:17.804 }' 00:25:17.804 09:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:17.804 09:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.096 09:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:18.096 09:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:18.096 09:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:25:18.096 09:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:18.389 [2024-07-15 09:51:46.368117] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:18.389 09:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:18.389 09:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:18.389 09:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:18.389 09:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:18.389 09:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:18.389 09:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:18.389 09:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:18.389 09:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:18.389 09:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:18.389 09:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:18.389 09:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:18.389 09:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:18.647 09:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:25:18.647 "name": "Existed_Raid", 00:25:18.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:18.647 "strip_size_kb": 64, 00:25:18.647 "state": "configuring", 00:25:18.647 "raid_level": "concat", 00:25:18.647 "superblock": false, 00:25:18.647 "num_base_bdevs": 4, 00:25:18.647 "num_base_bdevs_discovered": 2, 00:25:18.647 "num_base_bdevs_operational": 4, 00:25:18.647 "base_bdevs_list": [ 00:25:18.647 { 00:25:18.647 "name": null, 00:25:18.647 "uuid": "d75ba778-428f-11ef-a0af-c98d8ee52a94", 00:25:18.647 "is_configured": false, 00:25:18.647 "data_offset": 0, 00:25:18.647 "data_size": 65536 00:25:18.647 }, 00:25:18.647 { 00:25:18.647 "name": null, 00:25:18.647 "uuid": "d55c22f3-428f-11ef-a0af-c98d8ee52a94", 00:25:18.647 "is_configured": false, 00:25:18.647 "data_offset": 0, 00:25:18.647 "data_size": 65536 00:25:18.647 }, 00:25:18.647 { 00:25:18.647 "name": "BaseBdev3", 00:25:18.647 "uuid": "d5bac030-428f-11ef-a0af-c98d8ee52a94", 00:25:18.647 "is_configured": true, 00:25:18.647 "data_offset": 0, 00:25:18.647 "data_size": 65536 00:25:18.647 }, 00:25:18.647 { 00:25:18.647 "name": "BaseBdev4", 00:25:18.647 "uuid": "d61209a6-428f-11ef-a0af-c98d8ee52a94", 00:25:18.647 "is_configured": true, 00:25:18.647 "data_offset": 0, 00:25:18.647 "data_size": 65536 00:25:18.647 } 00:25:18.647 ] 00:25:18.647 }' 00:25:18.647 09:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:18.647 09:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.905 09:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:18.905 09:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:19.163 09:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:25:19.163 09:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:25:19.422 [2024-07-15 09:51:47.308956] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:19.422 09:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:19.422 09:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:19.422 09:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:19.422 09:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:19.422 09:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:19.422 09:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:19.422 09:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:19.422 09:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:19.422 09:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:19.422 09:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:19.422 09:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:25:19.422 09:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.422 09:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:19.422 "name": "Existed_Raid", 00:25:19.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:19.422 "strip_size_kb": 64, 00:25:19.422 "state": "configuring", 00:25:19.422 "raid_level": "concat", 00:25:19.422 "superblock": false, 00:25:19.422 "num_base_bdevs": 4, 00:25:19.422 "num_base_bdevs_discovered": 3, 00:25:19.422 "num_base_bdevs_operational": 4, 00:25:19.422 "base_bdevs_list": [ 00:25:19.422 { 00:25:19.422 "name": null, 00:25:19.422 "uuid": "d75ba778-428f-11ef-a0af-c98d8ee52a94", 00:25:19.422 "is_configured": false, 00:25:19.422 "data_offset": 0, 00:25:19.422 "data_size": 65536 00:25:19.422 }, 00:25:19.422 { 00:25:19.422 "name": "BaseBdev2", 00:25:19.422 "uuid": "d55c22f3-428f-11ef-a0af-c98d8ee52a94", 00:25:19.422 "is_configured": true, 00:25:19.422 "data_offset": 0, 00:25:19.422 "data_size": 65536 00:25:19.422 }, 00:25:19.422 { 00:25:19.422 "name": "BaseBdev3", 00:25:19.422 "uuid": "d5bac030-428f-11ef-a0af-c98d8ee52a94", 00:25:19.422 "is_configured": true, 00:25:19.422 "data_offset": 0, 00:25:19.422 "data_size": 65536 00:25:19.422 }, 00:25:19.422 { 00:25:19.422 "name": "BaseBdev4", 00:25:19.422 "uuid": "d61209a6-428f-11ef-a0af-c98d8ee52a94", 00:25:19.422 "is_configured": true, 00:25:19.422 "data_offset": 0, 00:25:19.422 "data_size": 65536 00:25:19.422 } 00:25:19.422 ] 00:25:19.422 }' 00:25:19.422 09:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:19.422 09:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:19.990 09:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.990 09:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:19.990 09:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:25:19.991 09:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.991 09:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:25:20.248 09:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u d75ba778-428f-11ef-a0af-c98d8ee52a94 00:25:20.506 [2024-07-15 09:51:48.357132] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:25:20.506 [2024-07-15 09:51:48.357159] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2348dee34f00 00:25:20.506 [2024-07-15 09:51:48.357162] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:25:20.506 [2024-07-15 09:51:48.357184] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2348dee97e20 00:25:20.506 [2024-07-15 09:51:48.357262] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2348dee34f00 00:25:20.506 [2024-07-15 09:51:48.357266] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name Existed_Raid, raid_bdev 0x2348dee34f00 00:25:20.506 [2024-07-15 09:51:48.357295] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:20.506 NewBaseBdev 00:25:20.506 09:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:25:20.506 09:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:25:20.506 09:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:20.506 09:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:20.506 09:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:20.506 09:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:20.506 09:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:20.506 09:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:25:20.765 [ 00:25:20.765 { 00:25:20.765 "name": "NewBaseBdev", 00:25:20.765 "aliases": [ 00:25:20.765 "d75ba778-428f-11ef-a0af-c98d8ee52a94" 00:25:20.765 ], 00:25:20.765 "product_name": "Malloc disk", 00:25:20.765 "block_size": 512, 00:25:20.765 "num_blocks": 65536, 00:25:20.765 "uuid": "d75ba778-428f-11ef-a0af-c98d8ee52a94", 00:25:20.765 "assigned_rate_limits": { 00:25:20.765 "rw_ios_per_sec": 0, 00:25:20.765 "rw_mbytes_per_sec": 0, 00:25:20.765 "r_mbytes_per_sec": 0, 00:25:20.765 "w_mbytes_per_sec": 0 00:25:20.765 }, 00:25:20.765 "claimed": true, 00:25:20.765 "claim_type": "exclusive_write", 00:25:20.765 "zoned": false, 00:25:20.765 "supported_io_types": { 00:25:20.765 "read": true, 00:25:20.765 "write": true, 00:25:20.765 "unmap": true, 00:25:20.765 "flush": true, 00:25:20.765 "reset": true, 00:25:20.765 "nvme_admin": false, 00:25:20.765 "nvme_io": false, 00:25:20.765 "nvme_io_md": false, 00:25:20.765 "write_zeroes": true, 00:25:20.765 "zcopy": true, 00:25:20.765 "get_zone_info": false, 00:25:20.765 "zone_management": false, 00:25:20.765 "zone_append": false, 00:25:20.765 "compare": false, 00:25:20.765 "compare_and_write": false, 00:25:20.765 "abort": true, 00:25:20.765 "seek_hole": false, 00:25:20.765 "seek_data": false, 00:25:20.765 "copy": true, 00:25:20.765 "nvme_iov_md": false 00:25:20.765 }, 00:25:20.765 "memory_domains": [ 00:25:20.765 { 00:25:20.765 "dma_device_id": "system", 00:25:20.765 "dma_device_type": 1 00:25:20.765 }, 00:25:20.765 { 00:25:20.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:20.765 "dma_device_type": 2 00:25:20.765 } 00:25:20.765 ], 00:25:20.765 "driver_specific": {} 00:25:20.765 } 00:25:20.765 ] 00:25:20.765 09:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:20.765 09:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:25:20.765 09:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:20.765 09:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:20.765 09:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:20.765 09:51:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:20.765 09:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:20.765 09:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:20.765 09:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:20.765 09:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:20.765 09:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:20.765 09:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:20.765 09:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:21.024 09:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:21.024 "name": "Existed_Raid", 00:25:21.024 "uuid": "da64d1e9-428f-11ef-a0af-c98d8ee52a94", 00:25:21.024 "strip_size_kb": 64, 00:25:21.024 "state": "online", 00:25:21.024 "raid_level": "concat", 00:25:21.024 "superblock": false, 00:25:21.024 "num_base_bdevs": 4, 00:25:21.024 "num_base_bdevs_discovered": 4, 00:25:21.024 "num_base_bdevs_operational": 4, 00:25:21.024 "base_bdevs_list": [ 00:25:21.024 { 00:25:21.024 "name": "NewBaseBdev", 00:25:21.024 "uuid": "d75ba778-428f-11ef-a0af-c98d8ee52a94", 00:25:21.024 "is_configured": true, 00:25:21.024 "data_offset": 0, 00:25:21.024 "data_size": 65536 00:25:21.024 }, 00:25:21.024 { 00:25:21.024 "name": "BaseBdev2", 00:25:21.024 "uuid": "d55c22f3-428f-11ef-a0af-c98d8ee52a94", 00:25:21.024 "is_configured": true, 00:25:21.024 "data_offset": 0, 00:25:21.024 "data_size": 65536 00:25:21.024 }, 00:25:21.024 { 00:25:21.024 "name": "BaseBdev3", 00:25:21.024 "uuid": "d5bac030-428f-11ef-a0af-c98d8ee52a94", 00:25:21.024 "is_configured": true, 00:25:21.024 "data_offset": 0, 00:25:21.024 "data_size": 65536 00:25:21.024 }, 00:25:21.024 { 00:25:21.024 "name": "BaseBdev4", 00:25:21.024 "uuid": "d61209a6-428f-11ef-a0af-c98d8ee52a94", 00:25:21.024 "is_configured": true, 00:25:21.024 "data_offset": 0, 00:25:21.025 "data_size": 65536 00:25:21.025 } 00:25:21.025 ] 00:25:21.025 }' 00:25:21.025 09:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:21.025 09:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.283 09:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:25:21.283 09:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:21.283 09:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:21.283 09:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:21.283 09:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:21.283 09:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:25:21.283 09:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:21.283 09:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 
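With NewBaseBdev claimed, the raid transitions to online and verify_raid_bdev_properties dumps the assembled Raid Volume, then walks each configured base bdev checking block_size, md_size, md_interleave and dif_type. A short sketch of that property walk, using only RPCs and jq filters that appear in this log:

    # Assumes the target from this run is still serving the raid socket.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Dump the Raid Volume descriptor (product_name "Raid Volume", state online).
    $rpc bdev_get_bdevs -b Existed_Raid | jq '.[]'
    # Collect the configured base bdev names, then spot-check one field each.
    names=$($rpc bdev_get_bdevs -b Existed_Raid |
        jq -r '.[].driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')
    for name in $names; do
        $rpc bdev_get_bdevs -b "$name" -t 2000 | jq '.[].block_size'   # expects 512
    done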
00:25:21.543 [2024-07-15 09:51:49.445127] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:21.543 09:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:21.543 "name": "Existed_Raid", 00:25:21.543 "aliases": [ 00:25:21.543 "da64d1e9-428f-11ef-a0af-c98d8ee52a94" 00:25:21.543 ], 00:25:21.543 "product_name": "Raid Volume", 00:25:21.543 "block_size": 512, 00:25:21.543 "num_blocks": 262144, 00:25:21.543 "uuid": "da64d1e9-428f-11ef-a0af-c98d8ee52a94", 00:25:21.543 "assigned_rate_limits": { 00:25:21.543 "rw_ios_per_sec": 0, 00:25:21.543 "rw_mbytes_per_sec": 0, 00:25:21.543 "r_mbytes_per_sec": 0, 00:25:21.543 "w_mbytes_per_sec": 0 00:25:21.543 }, 00:25:21.543 "claimed": false, 00:25:21.543 "zoned": false, 00:25:21.543 "supported_io_types": { 00:25:21.543 "read": true, 00:25:21.543 "write": true, 00:25:21.543 "unmap": true, 00:25:21.543 "flush": true, 00:25:21.543 "reset": true, 00:25:21.543 "nvme_admin": false, 00:25:21.543 "nvme_io": false, 00:25:21.543 "nvme_io_md": false, 00:25:21.543 "write_zeroes": true, 00:25:21.543 "zcopy": false, 00:25:21.543 "get_zone_info": false, 00:25:21.543 "zone_management": false, 00:25:21.543 "zone_append": false, 00:25:21.543 "compare": false, 00:25:21.543 "compare_and_write": false, 00:25:21.543 "abort": false, 00:25:21.543 "seek_hole": false, 00:25:21.543 "seek_data": false, 00:25:21.543 "copy": false, 00:25:21.543 "nvme_iov_md": false 00:25:21.543 }, 00:25:21.543 "memory_domains": [ 00:25:21.543 { 00:25:21.543 "dma_device_id": "system", 00:25:21.543 "dma_device_type": 1 00:25:21.543 }, 00:25:21.543 { 00:25:21.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:21.543 "dma_device_type": 2 00:25:21.543 }, 00:25:21.543 { 00:25:21.543 "dma_device_id": "system", 00:25:21.543 "dma_device_type": 1 00:25:21.543 }, 00:25:21.543 { 00:25:21.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:21.543 "dma_device_type": 2 00:25:21.543 }, 00:25:21.543 { 00:25:21.543 "dma_device_id": "system", 00:25:21.543 "dma_device_type": 1 00:25:21.543 }, 00:25:21.543 { 00:25:21.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:21.543 "dma_device_type": 2 00:25:21.543 }, 00:25:21.543 { 00:25:21.543 "dma_device_id": "system", 00:25:21.543 "dma_device_type": 1 00:25:21.543 }, 00:25:21.543 { 00:25:21.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:21.543 "dma_device_type": 2 00:25:21.543 } 00:25:21.543 ], 00:25:21.543 "driver_specific": { 00:25:21.543 "raid": { 00:25:21.543 "uuid": "da64d1e9-428f-11ef-a0af-c98d8ee52a94", 00:25:21.543 "strip_size_kb": 64, 00:25:21.543 "state": "online", 00:25:21.543 "raid_level": "concat", 00:25:21.543 "superblock": false, 00:25:21.543 "num_base_bdevs": 4, 00:25:21.543 "num_base_bdevs_discovered": 4, 00:25:21.543 "num_base_bdevs_operational": 4, 00:25:21.543 "base_bdevs_list": [ 00:25:21.543 { 00:25:21.543 "name": "NewBaseBdev", 00:25:21.543 "uuid": "d75ba778-428f-11ef-a0af-c98d8ee52a94", 00:25:21.543 "is_configured": true, 00:25:21.543 "data_offset": 0, 00:25:21.543 "data_size": 65536 00:25:21.543 }, 00:25:21.543 { 00:25:21.544 "name": "BaseBdev2", 00:25:21.544 "uuid": "d55c22f3-428f-11ef-a0af-c98d8ee52a94", 00:25:21.544 "is_configured": true, 00:25:21.544 "data_offset": 0, 00:25:21.544 "data_size": 65536 00:25:21.544 }, 00:25:21.544 { 00:25:21.544 "name": "BaseBdev3", 00:25:21.544 "uuid": "d5bac030-428f-11ef-a0af-c98d8ee52a94", 00:25:21.544 "is_configured": true, 00:25:21.544 "data_offset": 0, 00:25:21.544 "data_size": 65536 00:25:21.544 }, 00:25:21.544 { 00:25:21.544 
"name": "BaseBdev4", 00:25:21.544 "uuid": "d61209a6-428f-11ef-a0af-c98d8ee52a94", 00:25:21.544 "is_configured": true, 00:25:21.544 "data_offset": 0, 00:25:21.544 "data_size": 65536 00:25:21.544 } 00:25:21.544 ] 00:25:21.544 } 00:25:21.544 } 00:25:21.544 }' 00:25:21.544 09:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:21.544 09:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:25:21.544 BaseBdev2 00:25:21.544 BaseBdev3 00:25:21.544 BaseBdev4' 00:25:21.544 09:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:21.544 09:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:25:21.544 09:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:21.803 09:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:21.803 "name": "NewBaseBdev", 00:25:21.803 "aliases": [ 00:25:21.803 "d75ba778-428f-11ef-a0af-c98d8ee52a94" 00:25:21.803 ], 00:25:21.803 "product_name": "Malloc disk", 00:25:21.803 "block_size": 512, 00:25:21.803 "num_blocks": 65536, 00:25:21.803 "uuid": "d75ba778-428f-11ef-a0af-c98d8ee52a94", 00:25:21.803 "assigned_rate_limits": { 00:25:21.803 "rw_ios_per_sec": 0, 00:25:21.803 "rw_mbytes_per_sec": 0, 00:25:21.803 "r_mbytes_per_sec": 0, 00:25:21.803 "w_mbytes_per_sec": 0 00:25:21.803 }, 00:25:21.803 "claimed": true, 00:25:21.803 "claim_type": "exclusive_write", 00:25:21.803 "zoned": false, 00:25:21.803 "supported_io_types": { 00:25:21.803 "read": true, 00:25:21.803 "write": true, 00:25:21.803 "unmap": true, 00:25:21.803 "flush": true, 00:25:21.803 "reset": true, 00:25:21.803 "nvme_admin": false, 00:25:21.803 "nvme_io": false, 00:25:21.803 "nvme_io_md": false, 00:25:21.803 "write_zeroes": true, 00:25:21.803 "zcopy": true, 00:25:21.803 "get_zone_info": false, 00:25:21.803 "zone_management": false, 00:25:21.803 "zone_append": false, 00:25:21.803 "compare": false, 00:25:21.803 "compare_and_write": false, 00:25:21.803 "abort": true, 00:25:21.803 "seek_hole": false, 00:25:21.803 "seek_data": false, 00:25:21.803 "copy": true, 00:25:21.803 "nvme_iov_md": false 00:25:21.803 }, 00:25:21.803 "memory_domains": [ 00:25:21.803 { 00:25:21.803 "dma_device_id": "system", 00:25:21.803 "dma_device_type": 1 00:25:21.803 }, 00:25:21.803 { 00:25:21.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:21.803 "dma_device_type": 2 00:25:21.803 } 00:25:21.803 ], 00:25:21.803 "driver_specific": {} 00:25:21.803 }' 00:25:21.803 09:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:21.803 09:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:21.803 09:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:21.803 09:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:21.803 09:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:21.803 09:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:21.803 09:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:21.803 09:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:25:21.803 09:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:21.803 09:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:21.803 09:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:21.803 09:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:21.803 09:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:21.803 09:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:21.803 09:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:22.062 09:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:22.062 "name": "BaseBdev2", 00:25:22.062 "aliases": [ 00:25:22.062 "d55c22f3-428f-11ef-a0af-c98d8ee52a94" 00:25:22.062 ], 00:25:22.062 "product_name": "Malloc disk", 00:25:22.062 "block_size": 512, 00:25:22.062 "num_blocks": 65536, 00:25:22.062 "uuid": "d55c22f3-428f-11ef-a0af-c98d8ee52a94", 00:25:22.062 "assigned_rate_limits": { 00:25:22.062 "rw_ios_per_sec": 0, 00:25:22.062 "rw_mbytes_per_sec": 0, 00:25:22.062 "r_mbytes_per_sec": 0, 00:25:22.062 "w_mbytes_per_sec": 0 00:25:22.062 }, 00:25:22.062 "claimed": true, 00:25:22.062 "claim_type": "exclusive_write", 00:25:22.062 "zoned": false, 00:25:22.062 "supported_io_types": { 00:25:22.062 "read": true, 00:25:22.062 "write": true, 00:25:22.062 "unmap": true, 00:25:22.062 "flush": true, 00:25:22.062 "reset": true, 00:25:22.062 "nvme_admin": false, 00:25:22.062 "nvme_io": false, 00:25:22.062 "nvme_io_md": false, 00:25:22.062 "write_zeroes": true, 00:25:22.062 "zcopy": true, 00:25:22.062 "get_zone_info": false, 00:25:22.062 "zone_management": false, 00:25:22.062 "zone_append": false, 00:25:22.062 "compare": false, 00:25:22.062 "compare_and_write": false, 00:25:22.062 "abort": true, 00:25:22.062 "seek_hole": false, 00:25:22.062 "seek_data": false, 00:25:22.062 "copy": true, 00:25:22.062 "nvme_iov_md": false 00:25:22.062 }, 00:25:22.062 "memory_domains": [ 00:25:22.062 { 00:25:22.062 "dma_device_id": "system", 00:25:22.062 "dma_device_type": 1 00:25:22.062 }, 00:25:22.062 { 00:25:22.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:22.062 "dma_device_type": 2 00:25:22.062 } 00:25:22.062 ], 00:25:22.062 "driver_specific": {} 00:25:22.062 }' 00:25:22.062 09:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:22.062 09:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:22.062 09:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:22.062 09:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:22.062 09:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:22.062 09:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:22.062 09:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:22.062 09:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:22.062 09:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:22.062 09:51:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:22.062 09:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:22.062 09:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:22.062 09:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:22.062 09:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:22.062 09:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:22.322 09:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:22.322 "name": "BaseBdev3", 00:25:22.322 "aliases": [ 00:25:22.322 "d5bac030-428f-11ef-a0af-c98d8ee52a94" 00:25:22.322 ], 00:25:22.322 "product_name": "Malloc disk", 00:25:22.322 "block_size": 512, 00:25:22.322 "num_blocks": 65536, 00:25:22.322 "uuid": "d5bac030-428f-11ef-a0af-c98d8ee52a94", 00:25:22.322 "assigned_rate_limits": { 00:25:22.322 "rw_ios_per_sec": 0, 00:25:22.322 "rw_mbytes_per_sec": 0, 00:25:22.322 "r_mbytes_per_sec": 0, 00:25:22.322 "w_mbytes_per_sec": 0 00:25:22.322 }, 00:25:22.322 "claimed": true, 00:25:22.322 "claim_type": "exclusive_write", 00:25:22.322 "zoned": false, 00:25:22.322 "supported_io_types": { 00:25:22.322 "read": true, 00:25:22.322 "write": true, 00:25:22.322 "unmap": true, 00:25:22.322 "flush": true, 00:25:22.322 "reset": true, 00:25:22.322 "nvme_admin": false, 00:25:22.322 "nvme_io": false, 00:25:22.322 "nvme_io_md": false, 00:25:22.322 "write_zeroes": true, 00:25:22.322 "zcopy": true, 00:25:22.322 "get_zone_info": false, 00:25:22.322 "zone_management": false, 00:25:22.322 "zone_append": false, 00:25:22.322 "compare": false, 00:25:22.322 "compare_and_write": false, 00:25:22.322 "abort": true, 00:25:22.322 "seek_hole": false, 00:25:22.322 "seek_data": false, 00:25:22.322 "copy": true, 00:25:22.322 "nvme_iov_md": false 00:25:22.322 }, 00:25:22.322 "memory_domains": [ 00:25:22.322 { 00:25:22.322 "dma_device_id": "system", 00:25:22.322 "dma_device_type": 1 00:25:22.322 }, 00:25:22.322 { 00:25:22.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:22.322 "dma_device_type": 2 00:25:22.322 } 00:25:22.322 ], 00:25:22.322 "driver_specific": {} 00:25:22.322 }' 00:25:22.322 09:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:22.322 09:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:22.322 09:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:22.322 09:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:22.322 09:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:22.322 09:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:22.322 09:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:22.322 09:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:22.322 09:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:22.322 09:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:22.322 09:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:22.322 09:51:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:22.322 09:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:22.322 09:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:25:22.322 09:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:22.581 09:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:22.581 "name": "BaseBdev4", 00:25:22.581 "aliases": [ 00:25:22.581 "d61209a6-428f-11ef-a0af-c98d8ee52a94" 00:25:22.581 ], 00:25:22.581 "product_name": "Malloc disk", 00:25:22.581 "block_size": 512, 00:25:22.581 "num_blocks": 65536, 00:25:22.581 "uuid": "d61209a6-428f-11ef-a0af-c98d8ee52a94", 00:25:22.581 "assigned_rate_limits": { 00:25:22.581 "rw_ios_per_sec": 0, 00:25:22.581 "rw_mbytes_per_sec": 0, 00:25:22.581 "r_mbytes_per_sec": 0, 00:25:22.581 "w_mbytes_per_sec": 0 00:25:22.581 }, 00:25:22.581 "claimed": true, 00:25:22.581 "claim_type": "exclusive_write", 00:25:22.581 "zoned": false, 00:25:22.581 "supported_io_types": { 00:25:22.581 "read": true, 00:25:22.581 "write": true, 00:25:22.581 "unmap": true, 00:25:22.581 "flush": true, 00:25:22.581 "reset": true, 00:25:22.581 "nvme_admin": false, 00:25:22.581 "nvme_io": false, 00:25:22.581 "nvme_io_md": false, 00:25:22.581 "write_zeroes": true, 00:25:22.581 "zcopy": true, 00:25:22.581 "get_zone_info": false, 00:25:22.581 "zone_management": false, 00:25:22.581 "zone_append": false, 00:25:22.581 "compare": false, 00:25:22.581 "compare_and_write": false, 00:25:22.581 "abort": true, 00:25:22.581 "seek_hole": false, 00:25:22.581 "seek_data": false, 00:25:22.581 "copy": true, 00:25:22.581 "nvme_iov_md": false 00:25:22.581 }, 00:25:22.581 "memory_domains": [ 00:25:22.581 { 00:25:22.581 "dma_device_id": "system", 00:25:22.581 "dma_device_type": 1 00:25:22.581 }, 00:25:22.581 { 00:25:22.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:22.581 "dma_device_type": 2 00:25:22.581 } 00:25:22.581 ], 00:25:22.581 "driver_specific": {} 00:25:22.581 }' 00:25:22.581 09:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:22.581 09:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:22.581 09:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:22.581 09:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:22.581 09:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:22.581 09:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:22.581 09:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:22.581 09:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:22.581 09:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:22.581 09:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:22.581 09:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:22.581 09:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:22.581 09:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:22.840 [2024-07-15 09:51:50.849177] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:22.840 [2024-07-15 09:51:50.849204] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:22.840 [2024-07-15 09:51:50.849220] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:22.840 [2024-07-15 09:51:50.849234] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:22.840 [2024-07-15 09:51:50.849238] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2348dee34f00 name Existed_Raid, state offline 00:25:22.840 09:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 60511 00:25:22.840 09:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 60511 ']' 00:25:22.840 09:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 60511 00:25:22.840 09:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:25:22.840 09:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:25:22.840 09:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 60511 00:25:22.840 09:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:25:22.840 09:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:25:22.840 09:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:25:22.840 killing process with pid 60511 00:25:22.840 09:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60511' 00:25:22.840 09:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 60511 00:25:22.840 [2024-07-15 09:51:50.879584] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:22.840 09:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 60511 00:25:22.840 [2024-07-15 09:51:50.914001] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:23.098 09:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:25:23.098 00:25:23.098 real 0m22.575s 00:25:23.098 user 0m40.027s 00:25:23.098 sys 0m4.325s 00:25:23.098 09:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:23.098 09:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.098 ************************************ 00:25:23.098 END TEST raid_state_function_test 00:25:23.098 ************************************ 00:25:23.357 09:51:51 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:25:23.357 09:51:51 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:25:23.357 09:51:51 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:25:23.357 09:51:51 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:23.357 09:51:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:23.357 ************************************ 00:25:23.357 START TEST raid_state_function_test_sb 00:25:23.357 ************************************ 00:25:23.357 09:51:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 4 true 00:25:23.357 09:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:25:23.357 09:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:25:23.357 09:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:25:23.357 09:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:25:23.357 09:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:25:23.357 09:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:23.357 09:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:25:23.357 09:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:23.357 09:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:23.357 09:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:25:23.357 09:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:23.357 09:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:23.357 09:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:25:23.357 09:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:23.357 09:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:23.357 09:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:25:23.357 09:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:23.357 09:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:23.357 09:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:23.357 09:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:25:23.357 09:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:25:23.357 09:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:25:23.357 09:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:25:23.357 09:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:25:23.357 09:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:25:23.357 09:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:25:23.357 09:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:25:23.357 09:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:25:23.357 09:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:25:23.357 09:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=61314 00:25:23.357 Process raid pid: 61314 00:25:23.357 09:51:51 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 61314' 00:25:23.357 09:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 61314 /var/tmp/spdk-raid.sock 00:25:23.357 09:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:23.358 09:51:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 61314 ']' 00:25:23.358 09:51:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:23.358 09:51:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:23.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:23.358 09:51:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:23.358 09:51:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:23.358 09:51:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:23.358 [2024-07-15 09:51:51.250536] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:25:23.358 [2024-07-15 09:51:51.250843] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:25:23.617 EAL: TSC is not safe to use in SMP mode 00:25:23.617 EAL: TSC is not invariant 00:25:23.617 [2024-07-15 09:51:51.683502] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:23.875 [2024-07-15 09:51:51.801234] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
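The superblock variant that follows drives the same RPC cycle as raid_state_function_test above, except that bdev_raid_create is passed -s, so each base bdev reserves room for on-disk metadata (the base bdevs report data_offset 2048 and data_size 63488 instead of data_offset 0 and data_size 65536). A minimal sketch of that create/verify/delete cycle, assuming a bdev_svc instance is already listening on /var/tmp/spdk-raid.sock and using the same malloc geometry as the test (32 MiB, 512-byte blocks):

#!/usr/bin/env bash
# Sketch of the RPC flow exercised by raid_state_function_test_sb.
# Assumes bdev_svc is already up on the -s socket, as in the log above.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Create four 32 MiB malloc bdevs with 512-byte blocks (65536 blocks each).
for i in 1 2 3 4; do
    $rpc bdev_malloc_create 32 512 -b "BaseBdev$i"
done

# Assemble a concat array with a 64 KiB strip; -s requests a superblock.
$rpc bdev_raid_create -z 64 -s -r concat \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

# Once all four members are configured the array reports state "online".
$rpc bdev_raid_get_bdevs all | \
    jq -r '.[] | select(.name == "Existed_Raid") | .state, .num_base_bdevs_discovered'

# Tear the array down again; this releases the claims on the base bdevs.
$rpc bdev_raid_delete Existed_Raid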
00:25:23.875 [2024-07-15 09:51:51.803781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.875 [2024-07-15 09:51:51.804531] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:23.875 [2024-07-15 09:51:51.804544] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:24.149 09:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:24.149 09:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:25:24.149 09:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:24.408 [2024-07-15 09:51:52.368251] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:24.408 [2024-07-15 09:51:52.368314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:24.408 [2024-07-15 09:51:52.368319] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:24.408 [2024-07-15 09:51:52.368327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:24.408 [2024-07-15 09:51:52.368330] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:24.408 [2024-07-15 09:51:52.368336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:24.408 [2024-07-15 09:51:52.368339] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:24.408 [2024-07-15 09:51:52.368346] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:24.408 09:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:24.408 09:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:24.408 09:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:24.408 09:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:24.408 09:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:24.408 09:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:24.408 09:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:24.408 09:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:24.408 09:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:24.408 09:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:24.409 09:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:24.409 09:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:24.668 09:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:24.668 "name": "Existed_Raid", 00:25:24.668 "uuid": 
"dcc8dc63-428f-11ef-a0af-c98d8ee52a94", 00:25:24.668 "strip_size_kb": 64, 00:25:24.668 "state": "configuring", 00:25:24.668 "raid_level": "concat", 00:25:24.668 "superblock": true, 00:25:24.668 "num_base_bdevs": 4, 00:25:24.668 "num_base_bdevs_discovered": 0, 00:25:24.668 "num_base_bdevs_operational": 4, 00:25:24.668 "base_bdevs_list": [ 00:25:24.668 { 00:25:24.668 "name": "BaseBdev1", 00:25:24.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:24.668 "is_configured": false, 00:25:24.668 "data_offset": 0, 00:25:24.668 "data_size": 0 00:25:24.668 }, 00:25:24.668 { 00:25:24.668 "name": "BaseBdev2", 00:25:24.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:24.668 "is_configured": false, 00:25:24.668 "data_offset": 0, 00:25:24.668 "data_size": 0 00:25:24.668 }, 00:25:24.668 { 00:25:24.668 "name": "BaseBdev3", 00:25:24.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:24.668 "is_configured": false, 00:25:24.668 "data_offset": 0, 00:25:24.668 "data_size": 0 00:25:24.668 }, 00:25:24.668 { 00:25:24.668 "name": "BaseBdev4", 00:25:24.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:24.668 "is_configured": false, 00:25:24.668 "data_offset": 0, 00:25:24.668 "data_size": 0 00:25:24.668 } 00:25:24.668 ] 00:25:24.668 }' 00:25:24.669 09:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:24.669 09:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:24.928 09:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:25.188 [2024-07-15 09:51:53.116264] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:25.188 [2024-07-15 09:51:53.116299] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xdabbfa34500 name Existed_Raid, state configuring 00:25:25.188 09:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:25.447 [2024-07-15 09:51:53.344291] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:25.447 [2024-07-15 09:51:53.344350] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:25.447 [2024-07-15 09:51:53.344359] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:25.447 [2024-07-15 09:51:53.344373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:25.447 [2024-07-15 09:51:53.344382] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:25.447 [2024-07-15 09:51:53.344396] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:25.447 [2024-07-15 09:51:53.344404] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:25.447 [2024-07-15 09:51:53.344417] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:25.447 09:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:25.447 [2024-07-15 09:51:53.541511] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is 
claimed 00:25:25.447 BaseBdev1 00:25:25.705 09:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:25:25.705 09:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:25:25.705 09:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:25.705 09:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:25.705 09:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:25.705 09:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:25.705 09:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:25.705 09:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:25.964 [ 00:25:25.964 { 00:25:25.964 "name": "BaseBdev1", 00:25:25.964 "aliases": [ 00:25:25.964 "dd7bb3d8-428f-11ef-a0af-c98d8ee52a94" 00:25:25.964 ], 00:25:25.964 "product_name": "Malloc disk", 00:25:25.964 "block_size": 512, 00:25:25.964 "num_blocks": 65536, 00:25:25.964 "uuid": "dd7bb3d8-428f-11ef-a0af-c98d8ee52a94", 00:25:25.964 "assigned_rate_limits": { 00:25:25.964 "rw_ios_per_sec": 0, 00:25:25.964 "rw_mbytes_per_sec": 0, 00:25:25.964 "r_mbytes_per_sec": 0, 00:25:25.964 "w_mbytes_per_sec": 0 00:25:25.964 }, 00:25:25.964 "claimed": true, 00:25:25.964 "claim_type": "exclusive_write", 00:25:25.964 "zoned": false, 00:25:25.964 "supported_io_types": { 00:25:25.964 "read": true, 00:25:25.964 "write": true, 00:25:25.964 "unmap": true, 00:25:25.964 "flush": true, 00:25:25.964 "reset": true, 00:25:25.964 "nvme_admin": false, 00:25:25.964 "nvme_io": false, 00:25:25.964 "nvme_io_md": false, 00:25:25.964 "write_zeroes": true, 00:25:25.964 "zcopy": true, 00:25:25.964 "get_zone_info": false, 00:25:25.964 "zone_management": false, 00:25:25.964 "zone_append": false, 00:25:25.964 "compare": false, 00:25:25.964 "compare_and_write": false, 00:25:25.964 "abort": true, 00:25:25.964 "seek_hole": false, 00:25:25.964 "seek_data": false, 00:25:25.964 "copy": true, 00:25:25.964 "nvme_iov_md": false 00:25:25.964 }, 00:25:25.964 "memory_domains": [ 00:25:25.964 { 00:25:25.964 "dma_device_id": "system", 00:25:25.964 "dma_device_type": 1 00:25:25.964 }, 00:25:25.964 { 00:25:25.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:25.964 "dma_device_type": 2 00:25:25.964 } 00:25:25.964 ], 00:25:25.964 "driver_specific": {} 00:25:25.964 } 00:25:25.964 ] 00:25:25.964 09:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:25.964 09:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:25.964 09:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:25.964 09:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:25.964 09:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:25.964 09:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:25.964 09:51:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:25.964 09:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:25.964 09:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:25.964 09:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:25.964 09:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:25.964 09:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:25.964 09:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:26.222 09:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:26.222 "name": "Existed_Raid", 00:25:26.222 "uuid": "dd5dcb07-428f-11ef-a0af-c98d8ee52a94", 00:25:26.222 "strip_size_kb": 64, 00:25:26.222 "state": "configuring", 00:25:26.222 "raid_level": "concat", 00:25:26.222 "superblock": true, 00:25:26.222 "num_base_bdevs": 4, 00:25:26.222 "num_base_bdevs_discovered": 1, 00:25:26.222 "num_base_bdevs_operational": 4, 00:25:26.222 "base_bdevs_list": [ 00:25:26.222 { 00:25:26.222 "name": "BaseBdev1", 00:25:26.222 "uuid": "dd7bb3d8-428f-11ef-a0af-c98d8ee52a94", 00:25:26.222 "is_configured": true, 00:25:26.222 "data_offset": 2048, 00:25:26.222 "data_size": 63488 00:25:26.222 }, 00:25:26.222 { 00:25:26.222 "name": "BaseBdev2", 00:25:26.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:26.222 "is_configured": false, 00:25:26.222 "data_offset": 0, 00:25:26.222 "data_size": 0 00:25:26.222 }, 00:25:26.222 { 00:25:26.222 "name": "BaseBdev3", 00:25:26.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:26.222 "is_configured": false, 00:25:26.222 "data_offset": 0, 00:25:26.222 "data_size": 0 00:25:26.222 }, 00:25:26.222 { 00:25:26.222 "name": "BaseBdev4", 00:25:26.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:26.222 "is_configured": false, 00:25:26.222 "data_offset": 0, 00:25:26.222 "data_size": 0 00:25:26.222 } 00:25:26.222 ] 00:25:26.222 }' 00:25:26.222 09:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:26.222 09:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:26.481 09:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:26.739 [2024-07-15 09:51:54.644353] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:26.739 [2024-07-15 09:51:54.644397] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xdabbfa34500 name Existed_Raid, state configuring 00:25:26.739 09:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:26.997 [2024-07-15 09:51:54.916384] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:26.997 [2024-07-15 09:51:54.917342] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:26.997 [2024-07-15 09:51:54.917399] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:26.997 [2024-07-15 09:51:54.917405] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:26.997 [2024-07-15 09:51:54.917412] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:26.997 [2024-07-15 09:51:54.917416] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:26.997 [2024-07-15 09:51:54.917422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:26.997 09:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:25:26.997 09:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:26.997 09:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:26.997 09:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:26.997 09:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:26.997 09:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:26.997 09:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:26.997 09:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:26.997 09:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:26.997 09:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:26.997 09:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:26.997 09:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:26.997 09:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:26.997 09:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:27.256 09:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:27.256 "name": "Existed_Raid", 00:25:27.256 "uuid": "de4dacc1-428f-11ef-a0af-c98d8ee52a94", 00:25:27.256 "strip_size_kb": 64, 00:25:27.256 "state": "configuring", 00:25:27.256 "raid_level": "concat", 00:25:27.256 "superblock": true, 00:25:27.256 "num_base_bdevs": 4, 00:25:27.256 "num_base_bdevs_discovered": 1, 00:25:27.256 "num_base_bdevs_operational": 4, 00:25:27.256 "base_bdevs_list": [ 00:25:27.256 { 00:25:27.256 "name": "BaseBdev1", 00:25:27.256 "uuid": "dd7bb3d8-428f-11ef-a0af-c98d8ee52a94", 00:25:27.256 "is_configured": true, 00:25:27.256 "data_offset": 2048, 00:25:27.256 "data_size": 63488 00:25:27.256 }, 00:25:27.256 { 00:25:27.256 "name": "BaseBdev2", 00:25:27.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:27.256 "is_configured": false, 00:25:27.256 "data_offset": 0, 00:25:27.256 "data_size": 0 00:25:27.256 }, 00:25:27.256 { 00:25:27.256 "name": "BaseBdev3", 00:25:27.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:27.256 "is_configured": false, 00:25:27.256 "data_offset": 0, 00:25:27.256 "data_size": 0 00:25:27.256 }, 00:25:27.256 { 00:25:27.256 "name": "BaseBdev4", 
00:25:27.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:27.256 "is_configured": false, 00:25:27.256 "data_offset": 0, 00:25:27.256 "data_size": 0 00:25:27.256 } 00:25:27.256 ] 00:25:27.256 }' 00:25:27.256 09:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:27.256 09:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:27.514 09:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:27.514 [2024-07-15 09:51:55.580529] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:27.514 BaseBdev2 00:25:27.514 09:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:25:27.514 09:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:25:27.514 09:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:27.514 09:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:27.514 09:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:27.514 09:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:27.514 09:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:27.772 09:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:28.030 [ 00:25:28.030 { 00:25:28.030 "name": "BaseBdev2", 00:25:28.030 "aliases": [ 00:25:28.030 "deb2ff23-428f-11ef-a0af-c98d8ee52a94" 00:25:28.030 ], 00:25:28.030 "product_name": "Malloc disk", 00:25:28.030 "block_size": 512, 00:25:28.030 "num_blocks": 65536, 00:25:28.030 "uuid": "deb2ff23-428f-11ef-a0af-c98d8ee52a94", 00:25:28.030 "assigned_rate_limits": { 00:25:28.030 "rw_ios_per_sec": 0, 00:25:28.030 "rw_mbytes_per_sec": 0, 00:25:28.030 "r_mbytes_per_sec": 0, 00:25:28.030 "w_mbytes_per_sec": 0 00:25:28.030 }, 00:25:28.030 "claimed": true, 00:25:28.030 "claim_type": "exclusive_write", 00:25:28.030 "zoned": false, 00:25:28.030 "supported_io_types": { 00:25:28.030 "read": true, 00:25:28.030 "write": true, 00:25:28.030 "unmap": true, 00:25:28.030 "flush": true, 00:25:28.030 "reset": true, 00:25:28.030 "nvme_admin": false, 00:25:28.030 "nvme_io": false, 00:25:28.030 "nvme_io_md": false, 00:25:28.030 "write_zeroes": true, 00:25:28.030 "zcopy": true, 00:25:28.030 "get_zone_info": false, 00:25:28.030 "zone_management": false, 00:25:28.030 "zone_append": false, 00:25:28.030 "compare": false, 00:25:28.030 "compare_and_write": false, 00:25:28.030 "abort": true, 00:25:28.030 "seek_hole": false, 00:25:28.030 "seek_data": false, 00:25:28.030 "copy": true, 00:25:28.030 "nvme_iov_md": false 00:25:28.030 }, 00:25:28.030 "memory_domains": [ 00:25:28.030 { 00:25:28.030 "dma_device_id": "system", 00:25:28.030 "dma_device_type": 1 00:25:28.030 }, 00:25:28.030 { 00:25:28.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:28.030 "dma_device_type": 2 00:25:28.030 } 00:25:28.030 ], 00:25:28.030 "driver_specific": {} 00:25:28.030 } 00:25:28.030 ] 00:25:28.030 09:51:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:28.030 09:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:28.030 09:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:28.030 09:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:28.030 09:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:28.030 09:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:28.030 09:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:28.030 09:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:28.030 09:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:28.030 09:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:28.030 09:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:28.030 09:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:28.030 09:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:28.030 09:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:28.030 09:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:28.288 09:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:28.288 "name": "Existed_Raid", 00:25:28.288 "uuid": "de4dacc1-428f-11ef-a0af-c98d8ee52a94", 00:25:28.288 "strip_size_kb": 64, 00:25:28.288 "state": "configuring", 00:25:28.288 "raid_level": "concat", 00:25:28.288 "superblock": true, 00:25:28.288 "num_base_bdevs": 4, 00:25:28.289 "num_base_bdevs_discovered": 2, 00:25:28.289 "num_base_bdevs_operational": 4, 00:25:28.289 "base_bdevs_list": [ 00:25:28.289 { 00:25:28.289 "name": "BaseBdev1", 00:25:28.289 "uuid": "dd7bb3d8-428f-11ef-a0af-c98d8ee52a94", 00:25:28.289 "is_configured": true, 00:25:28.289 "data_offset": 2048, 00:25:28.289 "data_size": 63488 00:25:28.289 }, 00:25:28.289 { 00:25:28.289 "name": "BaseBdev2", 00:25:28.289 "uuid": "deb2ff23-428f-11ef-a0af-c98d8ee52a94", 00:25:28.289 "is_configured": true, 00:25:28.289 "data_offset": 2048, 00:25:28.289 "data_size": 63488 00:25:28.289 }, 00:25:28.289 { 00:25:28.289 "name": "BaseBdev3", 00:25:28.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.289 "is_configured": false, 00:25:28.289 "data_offset": 0, 00:25:28.289 "data_size": 0 00:25:28.289 }, 00:25:28.289 { 00:25:28.289 "name": "BaseBdev4", 00:25:28.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.289 "is_configured": false, 00:25:28.289 "data_offset": 0, 00:25:28.289 "data_size": 0 00:25:28.289 } 00:25:28.289 ] 00:25:28.289 }' 00:25:28.289 09:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:28.289 09:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:28.547 09:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:28.805 [2024-07-15 09:51:56.656565] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:28.805 BaseBdev3 00:25:28.805 09:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:25:28.805 09:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:25:28.805 09:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:28.805 09:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:28.805 09:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:28.806 09:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:28.806 09:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:28.806 09:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:29.064 [ 00:25:29.064 { 00:25:29.064 "name": "BaseBdev3", 00:25:29.064 "aliases": [ 00:25:29.064 "df57313c-428f-11ef-a0af-c98d8ee52a94" 00:25:29.064 ], 00:25:29.064 "product_name": "Malloc disk", 00:25:29.064 "block_size": 512, 00:25:29.064 "num_blocks": 65536, 00:25:29.064 "uuid": "df57313c-428f-11ef-a0af-c98d8ee52a94", 00:25:29.064 "assigned_rate_limits": { 00:25:29.064 "rw_ios_per_sec": 0, 00:25:29.064 "rw_mbytes_per_sec": 0, 00:25:29.064 "r_mbytes_per_sec": 0, 00:25:29.064 "w_mbytes_per_sec": 0 00:25:29.064 }, 00:25:29.064 "claimed": true, 00:25:29.064 "claim_type": "exclusive_write", 00:25:29.064 "zoned": false, 00:25:29.064 "supported_io_types": { 00:25:29.064 "read": true, 00:25:29.064 "write": true, 00:25:29.064 "unmap": true, 00:25:29.064 "flush": true, 00:25:29.064 "reset": true, 00:25:29.064 "nvme_admin": false, 00:25:29.064 "nvme_io": false, 00:25:29.064 "nvme_io_md": false, 00:25:29.064 "write_zeroes": true, 00:25:29.064 "zcopy": true, 00:25:29.064 "get_zone_info": false, 00:25:29.064 "zone_management": false, 00:25:29.064 "zone_append": false, 00:25:29.064 "compare": false, 00:25:29.064 "compare_and_write": false, 00:25:29.064 "abort": true, 00:25:29.064 "seek_hole": false, 00:25:29.064 "seek_data": false, 00:25:29.064 "copy": true, 00:25:29.064 "nvme_iov_md": false 00:25:29.064 }, 00:25:29.064 "memory_domains": [ 00:25:29.064 { 00:25:29.064 "dma_device_id": "system", 00:25:29.064 "dma_device_type": 1 00:25:29.064 }, 00:25:29.064 { 00:25:29.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:29.064 "dma_device_type": 2 00:25:29.064 } 00:25:29.064 ], 00:25:29.064 "driver_specific": {} 00:25:29.064 } 00:25:29.064 ] 00:25:29.064 09:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:29.064 09:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:29.064 09:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:29.064 09:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:29.064 09:51:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:29.064 09:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:29.064 09:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:29.064 09:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:29.064 09:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:29.064 09:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:29.064 09:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:29.064 09:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:29.064 09:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:29.064 09:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:29.064 09:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:29.322 09:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:29.322 "name": "Existed_Raid", 00:25:29.322 "uuid": "de4dacc1-428f-11ef-a0af-c98d8ee52a94", 00:25:29.322 "strip_size_kb": 64, 00:25:29.322 "state": "configuring", 00:25:29.322 "raid_level": "concat", 00:25:29.322 "superblock": true, 00:25:29.322 "num_base_bdevs": 4, 00:25:29.322 "num_base_bdevs_discovered": 3, 00:25:29.322 "num_base_bdevs_operational": 4, 00:25:29.322 "base_bdevs_list": [ 00:25:29.322 { 00:25:29.322 "name": "BaseBdev1", 00:25:29.322 "uuid": "dd7bb3d8-428f-11ef-a0af-c98d8ee52a94", 00:25:29.322 "is_configured": true, 00:25:29.322 "data_offset": 2048, 00:25:29.322 "data_size": 63488 00:25:29.322 }, 00:25:29.322 { 00:25:29.322 "name": "BaseBdev2", 00:25:29.322 "uuid": "deb2ff23-428f-11ef-a0af-c98d8ee52a94", 00:25:29.322 "is_configured": true, 00:25:29.322 "data_offset": 2048, 00:25:29.322 "data_size": 63488 00:25:29.322 }, 00:25:29.322 { 00:25:29.322 "name": "BaseBdev3", 00:25:29.322 "uuid": "df57313c-428f-11ef-a0af-c98d8ee52a94", 00:25:29.322 "is_configured": true, 00:25:29.322 "data_offset": 2048, 00:25:29.322 "data_size": 63488 00:25:29.322 }, 00:25:29.322 { 00:25:29.322 "name": "BaseBdev4", 00:25:29.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:29.322 "is_configured": false, 00:25:29.322 "data_offset": 0, 00:25:29.322 "data_size": 0 00:25:29.322 } 00:25:29.322 ] 00:25:29.322 }' 00:25:29.322 09:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:29.322 09:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:29.580 09:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:29.838 [2024-07-15 09:51:57.800638] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:29.838 [2024-07-15 09:51:57.800706] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xdabbfa34a00 00:25:29.838 [2024-07-15 09:51:57.800711] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:29.838 [2024-07-15 
09:51:57.800730] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xdabbfa97e20 00:25:29.838 [2024-07-15 09:51:57.800774] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xdabbfa34a00 00:25:29.838 [2024-07-15 09:51:57.800777] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xdabbfa34a00 00:25:29.838 [2024-07-15 09:51:57.800797] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:29.838 BaseBdev4 00:25:29.838 09:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:25:29.838 09:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:25:29.838 09:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:29.838 09:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:29.838 09:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:29.838 09:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:29.838 09:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:30.096 09:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:30.355 [ 00:25:30.355 { 00:25:30.355 "name": "BaseBdev4", 00:25:30.355 "aliases": [ 00:25:30.355 "e005c2fc-428f-11ef-a0af-c98d8ee52a94" 00:25:30.355 ], 00:25:30.355 "product_name": "Malloc disk", 00:25:30.355 "block_size": 512, 00:25:30.355 "num_blocks": 65536, 00:25:30.355 "uuid": "e005c2fc-428f-11ef-a0af-c98d8ee52a94", 00:25:30.355 "assigned_rate_limits": { 00:25:30.355 "rw_ios_per_sec": 0, 00:25:30.355 "rw_mbytes_per_sec": 0, 00:25:30.355 "r_mbytes_per_sec": 0, 00:25:30.355 "w_mbytes_per_sec": 0 00:25:30.355 }, 00:25:30.355 "claimed": true, 00:25:30.355 "claim_type": "exclusive_write", 00:25:30.355 "zoned": false, 00:25:30.355 "supported_io_types": { 00:25:30.355 "read": true, 00:25:30.355 "write": true, 00:25:30.355 "unmap": true, 00:25:30.355 "flush": true, 00:25:30.355 "reset": true, 00:25:30.355 "nvme_admin": false, 00:25:30.355 "nvme_io": false, 00:25:30.355 "nvme_io_md": false, 00:25:30.355 "write_zeroes": true, 00:25:30.355 "zcopy": true, 00:25:30.355 "get_zone_info": false, 00:25:30.355 "zone_management": false, 00:25:30.355 "zone_append": false, 00:25:30.355 "compare": false, 00:25:30.355 "compare_and_write": false, 00:25:30.355 "abort": true, 00:25:30.355 "seek_hole": false, 00:25:30.355 "seek_data": false, 00:25:30.355 "copy": true, 00:25:30.355 "nvme_iov_md": false 00:25:30.355 }, 00:25:30.355 "memory_domains": [ 00:25:30.355 { 00:25:30.355 "dma_device_id": "system", 00:25:30.355 "dma_device_type": 1 00:25:30.355 }, 00:25:30.355 { 00:25:30.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:30.355 "dma_device_type": 2 00:25:30.355 } 00:25:30.355 ], 00:25:30.355 "driver_specific": {} 00:25:30.355 } 00:25:30.355 ] 00:25:30.355 09:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:30.355 09:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:30.355 09:51:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:30.355 09:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:25:30.355 09:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:30.355 09:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:30.355 09:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:30.355 09:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:30.355 09:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:30.355 09:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:30.355 09:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:30.355 09:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:30.355 09:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:30.355 09:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:30.355 09:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:30.615 09:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:30.615 "name": "Existed_Raid", 00:25:30.615 "uuid": "de4dacc1-428f-11ef-a0af-c98d8ee52a94", 00:25:30.615 "strip_size_kb": 64, 00:25:30.615 "state": "online", 00:25:30.615 "raid_level": "concat", 00:25:30.615 "superblock": true, 00:25:30.615 "num_base_bdevs": 4, 00:25:30.615 "num_base_bdevs_discovered": 4, 00:25:30.615 "num_base_bdevs_operational": 4, 00:25:30.615 "base_bdevs_list": [ 00:25:30.615 { 00:25:30.615 "name": "BaseBdev1", 00:25:30.615 "uuid": "dd7bb3d8-428f-11ef-a0af-c98d8ee52a94", 00:25:30.615 "is_configured": true, 00:25:30.615 "data_offset": 2048, 00:25:30.615 "data_size": 63488 00:25:30.615 }, 00:25:30.615 { 00:25:30.615 "name": "BaseBdev2", 00:25:30.615 "uuid": "deb2ff23-428f-11ef-a0af-c98d8ee52a94", 00:25:30.615 "is_configured": true, 00:25:30.615 "data_offset": 2048, 00:25:30.615 "data_size": 63488 00:25:30.615 }, 00:25:30.615 { 00:25:30.615 "name": "BaseBdev3", 00:25:30.615 "uuid": "df57313c-428f-11ef-a0af-c98d8ee52a94", 00:25:30.615 "is_configured": true, 00:25:30.615 "data_offset": 2048, 00:25:30.615 "data_size": 63488 00:25:30.615 }, 00:25:30.615 { 00:25:30.615 "name": "BaseBdev4", 00:25:30.615 "uuid": "e005c2fc-428f-11ef-a0af-c98d8ee52a94", 00:25:30.615 "is_configured": true, 00:25:30.615 "data_offset": 2048, 00:25:30.615 "data_size": 63488 00:25:30.615 } 00:25:30.615 ] 00:25:30.615 }' 00:25:30.615 09:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:30.615 09:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:30.874 09:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:25:30.874 09:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:30.874 09:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # 
local raid_bdev_info 00:25:30.874 09:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:30.874 09:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:30.874 09:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:25:30.874 09:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:30.874 09:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:30.874 [2024-07-15 09:51:58.960614] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:31.134 09:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:31.134 "name": "Existed_Raid", 00:25:31.134 "aliases": [ 00:25:31.134 "de4dacc1-428f-11ef-a0af-c98d8ee52a94" 00:25:31.134 ], 00:25:31.134 "product_name": "Raid Volume", 00:25:31.134 "block_size": 512, 00:25:31.134 "num_blocks": 253952, 00:25:31.134 "uuid": "de4dacc1-428f-11ef-a0af-c98d8ee52a94", 00:25:31.134 "assigned_rate_limits": { 00:25:31.134 "rw_ios_per_sec": 0, 00:25:31.134 "rw_mbytes_per_sec": 0, 00:25:31.134 "r_mbytes_per_sec": 0, 00:25:31.134 "w_mbytes_per_sec": 0 00:25:31.134 }, 00:25:31.134 "claimed": false, 00:25:31.134 "zoned": false, 00:25:31.134 "supported_io_types": { 00:25:31.134 "read": true, 00:25:31.134 "write": true, 00:25:31.134 "unmap": true, 00:25:31.134 "flush": true, 00:25:31.134 "reset": true, 00:25:31.134 "nvme_admin": false, 00:25:31.134 "nvme_io": false, 00:25:31.134 "nvme_io_md": false, 00:25:31.134 "write_zeroes": true, 00:25:31.134 "zcopy": false, 00:25:31.134 "get_zone_info": false, 00:25:31.134 "zone_management": false, 00:25:31.134 "zone_append": false, 00:25:31.134 "compare": false, 00:25:31.134 "compare_and_write": false, 00:25:31.134 "abort": false, 00:25:31.134 "seek_hole": false, 00:25:31.134 "seek_data": false, 00:25:31.134 "copy": false, 00:25:31.134 "nvme_iov_md": false 00:25:31.134 }, 00:25:31.134 "memory_domains": [ 00:25:31.134 { 00:25:31.134 "dma_device_id": "system", 00:25:31.134 "dma_device_type": 1 00:25:31.134 }, 00:25:31.134 { 00:25:31.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:31.134 "dma_device_type": 2 00:25:31.134 }, 00:25:31.134 { 00:25:31.134 "dma_device_id": "system", 00:25:31.134 "dma_device_type": 1 00:25:31.134 }, 00:25:31.134 { 00:25:31.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:31.134 "dma_device_type": 2 00:25:31.134 }, 00:25:31.134 { 00:25:31.134 "dma_device_id": "system", 00:25:31.134 "dma_device_type": 1 00:25:31.134 }, 00:25:31.134 { 00:25:31.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:31.134 "dma_device_type": 2 00:25:31.134 }, 00:25:31.134 { 00:25:31.134 "dma_device_id": "system", 00:25:31.134 "dma_device_type": 1 00:25:31.134 }, 00:25:31.134 { 00:25:31.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:31.134 "dma_device_type": 2 00:25:31.134 } 00:25:31.134 ], 00:25:31.134 "driver_specific": { 00:25:31.134 "raid": { 00:25:31.134 "uuid": "de4dacc1-428f-11ef-a0af-c98d8ee52a94", 00:25:31.134 "strip_size_kb": 64, 00:25:31.134 "state": "online", 00:25:31.134 "raid_level": "concat", 00:25:31.134 "superblock": true, 00:25:31.134 "num_base_bdevs": 4, 00:25:31.134 "num_base_bdevs_discovered": 4, 00:25:31.134 "num_base_bdevs_operational": 4, 00:25:31.134 "base_bdevs_list": [ 00:25:31.134 { 00:25:31.134 "name": "BaseBdev1", 00:25:31.134 "uuid": 
"dd7bb3d8-428f-11ef-a0af-c98d8ee52a94", 00:25:31.134 "is_configured": true, 00:25:31.134 "data_offset": 2048, 00:25:31.134 "data_size": 63488 00:25:31.134 }, 00:25:31.134 { 00:25:31.134 "name": "BaseBdev2", 00:25:31.134 "uuid": "deb2ff23-428f-11ef-a0af-c98d8ee52a94", 00:25:31.134 "is_configured": true, 00:25:31.134 "data_offset": 2048, 00:25:31.134 "data_size": 63488 00:25:31.134 }, 00:25:31.134 { 00:25:31.134 "name": "BaseBdev3", 00:25:31.134 "uuid": "df57313c-428f-11ef-a0af-c98d8ee52a94", 00:25:31.134 "is_configured": true, 00:25:31.134 "data_offset": 2048, 00:25:31.134 "data_size": 63488 00:25:31.134 }, 00:25:31.134 { 00:25:31.134 "name": "BaseBdev4", 00:25:31.134 "uuid": "e005c2fc-428f-11ef-a0af-c98d8ee52a94", 00:25:31.134 "is_configured": true, 00:25:31.134 "data_offset": 2048, 00:25:31.134 "data_size": 63488 00:25:31.134 } 00:25:31.134 ] 00:25:31.134 } 00:25:31.134 } 00:25:31.134 }' 00:25:31.134 09:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:31.134 09:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:25:31.134 BaseBdev2 00:25:31.134 BaseBdev3 00:25:31.134 BaseBdev4' 00:25:31.134 09:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:31.134 09:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:31.134 09:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:25:31.134 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:31.134 "name": "BaseBdev1", 00:25:31.134 "aliases": [ 00:25:31.134 "dd7bb3d8-428f-11ef-a0af-c98d8ee52a94" 00:25:31.134 ], 00:25:31.134 "product_name": "Malloc disk", 00:25:31.134 "block_size": 512, 00:25:31.134 "num_blocks": 65536, 00:25:31.134 "uuid": "dd7bb3d8-428f-11ef-a0af-c98d8ee52a94", 00:25:31.134 "assigned_rate_limits": { 00:25:31.134 "rw_ios_per_sec": 0, 00:25:31.134 "rw_mbytes_per_sec": 0, 00:25:31.134 "r_mbytes_per_sec": 0, 00:25:31.134 "w_mbytes_per_sec": 0 00:25:31.134 }, 00:25:31.134 "claimed": true, 00:25:31.134 "claim_type": "exclusive_write", 00:25:31.134 "zoned": false, 00:25:31.134 "supported_io_types": { 00:25:31.134 "read": true, 00:25:31.134 "write": true, 00:25:31.134 "unmap": true, 00:25:31.134 "flush": true, 00:25:31.134 "reset": true, 00:25:31.134 "nvme_admin": false, 00:25:31.134 "nvme_io": false, 00:25:31.134 "nvme_io_md": false, 00:25:31.134 "write_zeroes": true, 00:25:31.134 "zcopy": true, 00:25:31.134 "get_zone_info": false, 00:25:31.134 "zone_management": false, 00:25:31.134 "zone_append": false, 00:25:31.134 "compare": false, 00:25:31.134 "compare_and_write": false, 00:25:31.134 "abort": true, 00:25:31.134 "seek_hole": false, 00:25:31.134 "seek_data": false, 00:25:31.134 "copy": true, 00:25:31.134 "nvme_iov_md": false 00:25:31.134 }, 00:25:31.134 "memory_domains": [ 00:25:31.134 { 00:25:31.134 "dma_device_id": "system", 00:25:31.134 "dma_device_type": 1 00:25:31.134 }, 00:25:31.134 { 00:25:31.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:31.134 "dma_device_type": 2 00:25:31.134 } 00:25:31.134 ], 00:25:31.134 "driver_specific": {} 00:25:31.134 }' 00:25:31.134 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:31.134 09:51:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:31.134 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:31.134 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:31.134 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:31.394 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:31.394 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:31.394 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:31.394 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:31.394 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:31.394 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:31.394 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:31.394 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:31.394 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:31.394 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:31.394 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:31.394 "name": "BaseBdev2", 00:25:31.394 "aliases": [ 00:25:31.394 "deb2ff23-428f-11ef-a0af-c98d8ee52a94" 00:25:31.394 ], 00:25:31.394 "product_name": "Malloc disk", 00:25:31.394 "block_size": 512, 00:25:31.394 "num_blocks": 65536, 00:25:31.394 "uuid": "deb2ff23-428f-11ef-a0af-c98d8ee52a94", 00:25:31.394 "assigned_rate_limits": { 00:25:31.394 "rw_ios_per_sec": 0, 00:25:31.394 "rw_mbytes_per_sec": 0, 00:25:31.394 "r_mbytes_per_sec": 0, 00:25:31.394 "w_mbytes_per_sec": 0 00:25:31.394 }, 00:25:31.394 "claimed": true, 00:25:31.394 "claim_type": "exclusive_write", 00:25:31.394 "zoned": false, 00:25:31.394 "supported_io_types": { 00:25:31.394 "read": true, 00:25:31.394 "write": true, 00:25:31.394 "unmap": true, 00:25:31.394 "flush": true, 00:25:31.394 "reset": true, 00:25:31.394 "nvme_admin": false, 00:25:31.394 "nvme_io": false, 00:25:31.394 "nvme_io_md": false, 00:25:31.394 "write_zeroes": true, 00:25:31.394 "zcopy": true, 00:25:31.394 "get_zone_info": false, 00:25:31.394 "zone_management": false, 00:25:31.394 "zone_append": false, 00:25:31.394 "compare": false, 00:25:31.394 "compare_and_write": false, 00:25:31.394 "abort": true, 00:25:31.394 "seek_hole": false, 00:25:31.394 "seek_data": false, 00:25:31.394 "copy": true, 00:25:31.394 "nvme_iov_md": false 00:25:31.394 }, 00:25:31.394 "memory_domains": [ 00:25:31.394 { 00:25:31.394 "dma_device_id": "system", 00:25:31.394 "dma_device_type": 1 00:25:31.394 }, 00:25:31.394 { 00:25:31.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:31.394 "dma_device_type": 2 00:25:31.394 } 00:25:31.394 ], 00:25:31.394 "driver_specific": {} 00:25:31.394 }' 00:25:31.394 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:31.394 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:31.654 09:51:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:31.654 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:31.654 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:31.654 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:31.654 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:31.654 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:31.654 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:31.654 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:31.654 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:31.654 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:31.654 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:31.654 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:31.654 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:31.913 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:31.913 "name": "BaseBdev3", 00:25:31.913 "aliases": [ 00:25:31.913 "df57313c-428f-11ef-a0af-c98d8ee52a94" 00:25:31.913 ], 00:25:31.913 "product_name": "Malloc disk", 00:25:31.913 "block_size": 512, 00:25:31.913 "num_blocks": 65536, 00:25:31.913 "uuid": "df57313c-428f-11ef-a0af-c98d8ee52a94", 00:25:31.913 "assigned_rate_limits": { 00:25:31.913 "rw_ios_per_sec": 0, 00:25:31.913 "rw_mbytes_per_sec": 0, 00:25:31.913 "r_mbytes_per_sec": 0, 00:25:31.913 "w_mbytes_per_sec": 0 00:25:31.913 }, 00:25:31.913 "claimed": true, 00:25:31.913 "claim_type": "exclusive_write", 00:25:31.913 "zoned": false, 00:25:31.913 "supported_io_types": { 00:25:31.913 "read": true, 00:25:31.913 "write": true, 00:25:31.913 "unmap": true, 00:25:31.913 "flush": true, 00:25:31.913 "reset": true, 00:25:31.913 "nvme_admin": false, 00:25:31.913 "nvme_io": false, 00:25:31.913 "nvme_io_md": false, 00:25:31.913 "write_zeroes": true, 00:25:31.913 "zcopy": true, 00:25:31.913 "get_zone_info": false, 00:25:31.913 "zone_management": false, 00:25:31.913 "zone_append": false, 00:25:31.913 "compare": false, 00:25:31.913 "compare_and_write": false, 00:25:31.913 "abort": true, 00:25:31.913 "seek_hole": false, 00:25:31.913 "seek_data": false, 00:25:31.913 "copy": true, 00:25:31.913 "nvme_iov_md": false 00:25:31.913 }, 00:25:31.913 "memory_domains": [ 00:25:31.913 { 00:25:31.913 "dma_device_id": "system", 00:25:31.913 "dma_device_type": 1 00:25:31.913 }, 00:25:31.913 { 00:25:31.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:31.913 "dma_device_type": 2 00:25:31.913 } 00:25:31.913 ], 00:25:31.913 "driver_specific": {} 00:25:31.913 }' 00:25:31.914 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:31.914 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:31.914 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:31.914 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 
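# NOTE (annotation, not captured log output): the trace above repeats one pattern per
# configured base bdev — bdev_raid.sh@204-208 fetch the bdev descriptor over the RPC
# socket and assert block_size / md_size / md_interleave / dif_type, which is why each
# jq command appears twice (once echoed by xtrace, once substituted). A minimal sketch
# of that check, assuming the same socket path and an SPDK checkout under $SPDK_DIR
# (hypothetical variable, not taken from this log):
#
#   info=$("$SPDK_DIR"/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
#              bdev_get_bdevs -b BaseBdev3 | jq '.[]')
#   [[ $(jq .block_size <<< "$info") == 512 ]]       # data block size matches
#   [[ $(jq .md_size <<< "$info") == null ]]         # no separate metadata region
#   [[ $(jq .md_interleave <<< "$info") == null ]]   # no interleaved metadata
#   [[ $(jq .dif_type <<< "$info") == null ]]        # no DIF protection configured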
00:25:31.914 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:31.914 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:31.914 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:31.914 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:31.914 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:31.914 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:31.914 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:31.914 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:31.914 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:31.914 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:31.914 09:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:25:32.173 09:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:32.173 "name": "BaseBdev4", 00:25:32.173 "aliases": [ 00:25:32.173 "e005c2fc-428f-11ef-a0af-c98d8ee52a94" 00:25:32.173 ], 00:25:32.173 "product_name": "Malloc disk", 00:25:32.173 "block_size": 512, 00:25:32.173 "num_blocks": 65536, 00:25:32.173 "uuid": "e005c2fc-428f-11ef-a0af-c98d8ee52a94", 00:25:32.173 "assigned_rate_limits": { 00:25:32.173 "rw_ios_per_sec": 0, 00:25:32.173 "rw_mbytes_per_sec": 0, 00:25:32.173 "r_mbytes_per_sec": 0, 00:25:32.173 "w_mbytes_per_sec": 0 00:25:32.173 }, 00:25:32.173 "claimed": true, 00:25:32.173 "claim_type": "exclusive_write", 00:25:32.173 "zoned": false, 00:25:32.173 "supported_io_types": { 00:25:32.173 "read": true, 00:25:32.173 "write": true, 00:25:32.173 "unmap": true, 00:25:32.173 "flush": true, 00:25:32.173 "reset": true, 00:25:32.173 "nvme_admin": false, 00:25:32.173 "nvme_io": false, 00:25:32.173 "nvme_io_md": false, 00:25:32.173 "write_zeroes": true, 00:25:32.173 "zcopy": true, 00:25:32.173 "get_zone_info": false, 00:25:32.173 "zone_management": false, 00:25:32.173 "zone_append": false, 00:25:32.173 "compare": false, 00:25:32.173 "compare_and_write": false, 00:25:32.173 "abort": true, 00:25:32.173 "seek_hole": false, 00:25:32.173 "seek_data": false, 00:25:32.173 "copy": true, 00:25:32.173 "nvme_iov_md": false 00:25:32.173 }, 00:25:32.173 "memory_domains": [ 00:25:32.173 { 00:25:32.173 "dma_device_id": "system", 00:25:32.173 "dma_device_type": 1 00:25:32.173 }, 00:25:32.173 { 00:25:32.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:32.173 "dma_device_type": 2 00:25:32.173 } 00:25:32.173 ], 00:25:32.173 "driver_specific": {} 00:25:32.173 }' 00:25:32.173 09:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:32.173 09:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:32.173 09:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:32.173 09:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:32.173 09:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:32.173 09:52:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:32.173 09:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:32.173 09:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:32.173 09:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:32.173 09:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:32.173 09:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:32.173 09:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:32.173 09:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:32.432 [2024-07-15 09:52:00.300643] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:32.432 [2024-07-15 09:52:00.300667] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:32.432 [2024-07-15 09:52:00.300680] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:32.432 09:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:25:32.432 09:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:25:32.432 09:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:32.432 09:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:25:32.432 09:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:25:32.432 09:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:25:32.432 09:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:32.432 09:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:25:32.433 09:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:32.433 09:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:32.433 09:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:32.433 09:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:32.433 09:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:32.433 09:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:32.433 09:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:32.433 09:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:32.433 09:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:32.433 09:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:32.433 "name": "Existed_Raid", 00:25:32.433 "uuid": "de4dacc1-428f-11ef-a0af-c98d8ee52a94", 00:25:32.433 "strip_size_kb": 64, 
00:25:32.433 "state": "offline", 00:25:32.433 "raid_level": "concat", 00:25:32.433 "superblock": true, 00:25:32.433 "num_base_bdevs": 4, 00:25:32.433 "num_base_bdevs_discovered": 3, 00:25:32.433 "num_base_bdevs_operational": 3, 00:25:32.433 "base_bdevs_list": [ 00:25:32.433 { 00:25:32.433 "name": null, 00:25:32.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:32.433 "is_configured": false, 00:25:32.433 "data_offset": 2048, 00:25:32.433 "data_size": 63488 00:25:32.433 }, 00:25:32.433 { 00:25:32.433 "name": "BaseBdev2", 00:25:32.433 "uuid": "deb2ff23-428f-11ef-a0af-c98d8ee52a94", 00:25:32.433 "is_configured": true, 00:25:32.433 "data_offset": 2048, 00:25:32.433 "data_size": 63488 00:25:32.433 }, 00:25:32.433 { 00:25:32.433 "name": "BaseBdev3", 00:25:32.433 "uuid": "df57313c-428f-11ef-a0af-c98d8ee52a94", 00:25:32.433 "is_configured": true, 00:25:32.433 "data_offset": 2048, 00:25:32.433 "data_size": 63488 00:25:32.433 }, 00:25:32.433 { 00:25:32.433 "name": "BaseBdev4", 00:25:32.433 "uuid": "e005c2fc-428f-11ef-a0af-c98d8ee52a94", 00:25:32.433 "is_configured": true, 00:25:32.433 "data_offset": 2048, 00:25:32.433 "data_size": 63488 00:25:32.433 } 00:25:32.433 ] 00:25:32.433 }' 00:25:32.433 09:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:32.433 09:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:32.692 09:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:25:32.692 09:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:32.692 09:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:32.692 09:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:32.951 09:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:32.951 09:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:32.951 09:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:33.210 [2024-07-15 09:52:01.169142] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:33.210 09:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:33.210 09:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:33.210 09:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:33.210 09:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:33.472 09:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:33.472 09:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:33.472 09:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:25:33.731 [2024-07-15 09:52:01.577616] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:33.731 09:52:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:33.731 09:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:33.731 09:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:33.731 09:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:33.731 09:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:33.731 09:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:33.731 09:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:25:33.989 [2024-07-15 09:52:01.974069] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:25:33.990 [2024-07-15 09:52:01.974099] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xdabbfa34a00 name Existed_Raid, state offline 00:25:33.990 09:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:33.990 09:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:33.990 09:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:25:33.990 09:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:34.248 09:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:25:34.248 09:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:25:34.248 09:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:25:34.248 09:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:25:34.248 09:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:34.248 09:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:34.508 BaseBdev2 00:25:34.508 09:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:25:34.508 09:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:25:34.508 09:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:34.508 09:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:34.508 09:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:34.508 09:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:34.508 09:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:34.508 09:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:34.768 [ 
00:25:34.768 { 00:25:34.768 "name": "BaseBdev2", 00:25:34.768 "aliases": [ 00:25:34.768 "e2bf179e-428f-11ef-a0af-c98d8ee52a94" 00:25:34.768 ], 00:25:34.768 "product_name": "Malloc disk", 00:25:34.768 "block_size": 512, 00:25:34.768 "num_blocks": 65536, 00:25:34.768 "uuid": "e2bf179e-428f-11ef-a0af-c98d8ee52a94", 00:25:34.768 "assigned_rate_limits": { 00:25:34.768 "rw_ios_per_sec": 0, 00:25:34.768 "rw_mbytes_per_sec": 0, 00:25:34.768 "r_mbytes_per_sec": 0, 00:25:34.768 "w_mbytes_per_sec": 0 00:25:34.768 }, 00:25:34.768 "claimed": false, 00:25:34.768 "zoned": false, 00:25:34.768 "supported_io_types": { 00:25:34.768 "read": true, 00:25:34.768 "write": true, 00:25:34.768 "unmap": true, 00:25:34.768 "flush": true, 00:25:34.768 "reset": true, 00:25:34.768 "nvme_admin": false, 00:25:34.768 "nvme_io": false, 00:25:34.768 "nvme_io_md": false, 00:25:34.768 "write_zeroes": true, 00:25:34.768 "zcopy": true, 00:25:34.768 "get_zone_info": false, 00:25:34.768 "zone_management": false, 00:25:34.768 "zone_append": false, 00:25:34.768 "compare": false, 00:25:34.768 "compare_and_write": false, 00:25:34.768 "abort": true, 00:25:34.768 "seek_hole": false, 00:25:34.768 "seek_data": false, 00:25:34.768 "copy": true, 00:25:34.768 "nvme_iov_md": false 00:25:34.768 }, 00:25:34.768 "memory_domains": [ 00:25:34.768 { 00:25:34.768 "dma_device_id": "system", 00:25:34.768 "dma_device_type": 1 00:25:34.768 }, 00:25:34.768 { 00:25:34.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:34.768 "dma_device_type": 2 00:25:34.768 } 00:25:34.768 ], 00:25:34.768 "driver_specific": {} 00:25:34.768 } 00:25:34.768 ] 00:25:34.768 09:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:34.768 09:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:34.768 09:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:34.768 09:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:35.027 BaseBdev3 00:25:35.027 09:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:25:35.027 09:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:25:35.027 09:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:35.027 09:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:35.027 09:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:35.027 09:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:35.027 09:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:35.284 09:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:35.543 [ 00:25:35.543 { 00:25:35.543 "name": "BaseBdev3", 00:25:35.543 "aliases": [ 00:25:35.543 "e321f997-428f-11ef-a0af-c98d8ee52a94" 00:25:35.543 ], 00:25:35.543 "product_name": "Malloc disk", 00:25:35.543 "block_size": 512, 00:25:35.543 "num_blocks": 65536, 00:25:35.543 "uuid": 
"e321f997-428f-11ef-a0af-c98d8ee52a94", 00:25:35.543 "assigned_rate_limits": { 00:25:35.543 "rw_ios_per_sec": 0, 00:25:35.543 "rw_mbytes_per_sec": 0, 00:25:35.543 "r_mbytes_per_sec": 0, 00:25:35.543 "w_mbytes_per_sec": 0 00:25:35.543 }, 00:25:35.543 "claimed": false, 00:25:35.543 "zoned": false, 00:25:35.543 "supported_io_types": { 00:25:35.543 "read": true, 00:25:35.543 "write": true, 00:25:35.543 "unmap": true, 00:25:35.543 "flush": true, 00:25:35.544 "reset": true, 00:25:35.544 "nvme_admin": false, 00:25:35.544 "nvme_io": false, 00:25:35.544 "nvme_io_md": false, 00:25:35.544 "write_zeroes": true, 00:25:35.544 "zcopy": true, 00:25:35.544 "get_zone_info": false, 00:25:35.544 "zone_management": false, 00:25:35.544 "zone_append": false, 00:25:35.544 "compare": false, 00:25:35.544 "compare_and_write": false, 00:25:35.544 "abort": true, 00:25:35.544 "seek_hole": false, 00:25:35.544 "seek_data": false, 00:25:35.544 "copy": true, 00:25:35.544 "nvme_iov_md": false 00:25:35.544 }, 00:25:35.544 "memory_domains": [ 00:25:35.544 { 00:25:35.544 "dma_device_id": "system", 00:25:35.544 "dma_device_type": 1 00:25:35.544 }, 00:25:35.544 { 00:25:35.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:35.544 "dma_device_type": 2 00:25:35.544 } 00:25:35.544 ], 00:25:35.544 "driver_specific": {} 00:25:35.544 } 00:25:35.544 ] 00:25:35.544 09:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:35.544 09:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:35.544 09:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:35.544 09:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:35.544 BaseBdev4 00:25:35.803 09:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:25:35.803 09:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:25:35.803 09:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:35.803 09:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:35.803 09:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:35.803 09:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:35.803 09:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:35.803 09:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:36.060 [ 00:25:36.060 { 00:25:36.060 "name": "BaseBdev4", 00:25:36.060 "aliases": [ 00:25:36.060 "e37f5d71-428f-11ef-a0af-c98d8ee52a94" 00:25:36.060 ], 00:25:36.060 "product_name": "Malloc disk", 00:25:36.060 "block_size": 512, 00:25:36.060 "num_blocks": 65536, 00:25:36.060 "uuid": "e37f5d71-428f-11ef-a0af-c98d8ee52a94", 00:25:36.060 "assigned_rate_limits": { 00:25:36.060 "rw_ios_per_sec": 0, 00:25:36.060 "rw_mbytes_per_sec": 0, 00:25:36.060 "r_mbytes_per_sec": 0, 00:25:36.060 "w_mbytes_per_sec": 0 00:25:36.060 }, 00:25:36.060 "claimed": false, 00:25:36.060 "zoned": false, 00:25:36.060 
"supported_io_types": { 00:25:36.060 "read": true, 00:25:36.060 "write": true, 00:25:36.060 "unmap": true, 00:25:36.060 "flush": true, 00:25:36.060 "reset": true, 00:25:36.060 "nvme_admin": false, 00:25:36.060 "nvme_io": false, 00:25:36.060 "nvme_io_md": false, 00:25:36.060 "write_zeroes": true, 00:25:36.060 "zcopy": true, 00:25:36.060 "get_zone_info": false, 00:25:36.060 "zone_management": false, 00:25:36.060 "zone_append": false, 00:25:36.060 "compare": false, 00:25:36.060 "compare_and_write": false, 00:25:36.060 "abort": true, 00:25:36.060 "seek_hole": false, 00:25:36.060 "seek_data": false, 00:25:36.060 "copy": true, 00:25:36.060 "nvme_iov_md": false 00:25:36.060 }, 00:25:36.060 "memory_domains": [ 00:25:36.060 { 00:25:36.060 "dma_device_id": "system", 00:25:36.060 "dma_device_type": 1 00:25:36.060 }, 00:25:36.060 { 00:25:36.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:36.060 "dma_device_type": 2 00:25:36.060 } 00:25:36.060 ], 00:25:36.060 "driver_specific": {} 00:25:36.060 } 00:25:36.060 ] 00:25:36.060 09:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:36.060 09:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:36.060 09:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:36.060 09:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:36.317 [2024-07-15 09:52:04.190684] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:36.317 [2024-07-15 09:52:04.190748] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:36.317 [2024-07-15 09:52:04.190755] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:36.317 [2024-07-15 09:52:04.191440] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:36.317 [2024-07-15 09:52:04.191462] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:36.317 09:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:36.317 09:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:36.317 09:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:36.317 09:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:36.317 09:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:36.317 09:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:36.317 09:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:36.317 09:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:36.317 09:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:36.318 09:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:36.318 09:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:36.318 09:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:36.318 09:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:36.318 "name": "Existed_Raid", 00:25:36.318 "uuid": "e3d4d257-428f-11ef-a0af-c98d8ee52a94", 00:25:36.318 "strip_size_kb": 64, 00:25:36.318 "state": "configuring", 00:25:36.318 "raid_level": "concat", 00:25:36.318 "superblock": true, 00:25:36.318 "num_base_bdevs": 4, 00:25:36.318 "num_base_bdevs_discovered": 3, 00:25:36.318 "num_base_bdevs_operational": 4, 00:25:36.318 "base_bdevs_list": [ 00:25:36.318 { 00:25:36.318 "name": "BaseBdev1", 00:25:36.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:36.318 "is_configured": false, 00:25:36.318 "data_offset": 0, 00:25:36.318 "data_size": 0 00:25:36.318 }, 00:25:36.318 { 00:25:36.318 "name": "BaseBdev2", 00:25:36.318 "uuid": "e2bf179e-428f-11ef-a0af-c98d8ee52a94", 00:25:36.318 "is_configured": true, 00:25:36.318 "data_offset": 2048, 00:25:36.318 "data_size": 63488 00:25:36.318 }, 00:25:36.318 { 00:25:36.318 "name": "BaseBdev3", 00:25:36.318 "uuid": "e321f997-428f-11ef-a0af-c98d8ee52a94", 00:25:36.318 "is_configured": true, 00:25:36.318 "data_offset": 2048, 00:25:36.318 "data_size": 63488 00:25:36.318 }, 00:25:36.318 { 00:25:36.318 "name": "BaseBdev4", 00:25:36.318 "uuid": "e37f5d71-428f-11ef-a0af-c98d8ee52a94", 00:25:36.318 "is_configured": true, 00:25:36.318 "data_offset": 2048, 00:25:36.318 "data_size": 63488 00:25:36.318 } 00:25:36.318 ] 00:25:36.318 }' 00:25:36.318 09:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:36.585 09:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:36.844 09:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:25:36.844 [2024-07-15 09:52:04.906711] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:36.844 09:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:36.844 09:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:36.844 09:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:36.844 09:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:36.844 09:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:36.844 09:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:36.844 09:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:36.844 09:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:36.844 09:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:36.844 09:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:36.844 09:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:25:36.844 09:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:37.102 09:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:37.102 "name": "Existed_Raid", 00:25:37.102 "uuid": "e3d4d257-428f-11ef-a0af-c98d8ee52a94", 00:25:37.102 "strip_size_kb": 64, 00:25:37.102 "state": "configuring", 00:25:37.102 "raid_level": "concat", 00:25:37.102 "superblock": true, 00:25:37.102 "num_base_bdevs": 4, 00:25:37.102 "num_base_bdevs_discovered": 2, 00:25:37.102 "num_base_bdevs_operational": 4, 00:25:37.102 "base_bdevs_list": [ 00:25:37.102 { 00:25:37.102 "name": "BaseBdev1", 00:25:37.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:37.102 "is_configured": false, 00:25:37.102 "data_offset": 0, 00:25:37.102 "data_size": 0 00:25:37.102 }, 00:25:37.102 { 00:25:37.102 "name": null, 00:25:37.102 "uuid": "e2bf179e-428f-11ef-a0af-c98d8ee52a94", 00:25:37.102 "is_configured": false, 00:25:37.102 "data_offset": 2048, 00:25:37.102 "data_size": 63488 00:25:37.102 }, 00:25:37.102 { 00:25:37.102 "name": "BaseBdev3", 00:25:37.102 "uuid": "e321f997-428f-11ef-a0af-c98d8ee52a94", 00:25:37.102 "is_configured": true, 00:25:37.102 "data_offset": 2048, 00:25:37.102 "data_size": 63488 00:25:37.102 }, 00:25:37.102 { 00:25:37.102 "name": "BaseBdev4", 00:25:37.102 "uuid": "e37f5d71-428f-11ef-a0af-c98d8ee52a94", 00:25:37.102 "is_configured": true, 00:25:37.102 "data_offset": 2048, 00:25:37.102 "data_size": 63488 00:25:37.102 } 00:25:37.102 ] 00:25:37.102 }' 00:25:37.102 09:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:37.102 09:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:37.361 09:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:37.361 09:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:37.640 09:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:25:37.641 09:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:37.899 [2024-07-15 09:52:05.850891] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:37.899 BaseBdev1 00:25:37.899 09:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:25:37.899 09:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:25:37.899 09:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:37.899 09:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:37.899 09:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:37.899 09:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:37.899 09:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:38.157 09:52:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:38.414 [ 00:25:38.414 { 00:25:38.414 "name": "BaseBdev1", 00:25:38.414 "aliases": [ 00:25:38.414 "e4d221b7-428f-11ef-a0af-c98d8ee52a94" 00:25:38.414 ], 00:25:38.414 "product_name": "Malloc disk", 00:25:38.414 "block_size": 512, 00:25:38.414 "num_blocks": 65536, 00:25:38.414 "uuid": "e4d221b7-428f-11ef-a0af-c98d8ee52a94", 00:25:38.414 "assigned_rate_limits": { 00:25:38.414 "rw_ios_per_sec": 0, 00:25:38.414 "rw_mbytes_per_sec": 0, 00:25:38.414 "r_mbytes_per_sec": 0, 00:25:38.414 "w_mbytes_per_sec": 0 00:25:38.414 }, 00:25:38.414 "claimed": true, 00:25:38.414 "claim_type": "exclusive_write", 00:25:38.414 "zoned": false, 00:25:38.414 "supported_io_types": { 00:25:38.414 "read": true, 00:25:38.414 "write": true, 00:25:38.414 "unmap": true, 00:25:38.414 "flush": true, 00:25:38.414 "reset": true, 00:25:38.414 "nvme_admin": false, 00:25:38.414 "nvme_io": false, 00:25:38.414 "nvme_io_md": false, 00:25:38.414 "write_zeroes": true, 00:25:38.414 "zcopy": true, 00:25:38.414 "get_zone_info": false, 00:25:38.414 "zone_management": false, 00:25:38.414 "zone_append": false, 00:25:38.414 "compare": false, 00:25:38.414 "compare_and_write": false, 00:25:38.414 "abort": true, 00:25:38.414 "seek_hole": false, 00:25:38.414 "seek_data": false, 00:25:38.414 "copy": true, 00:25:38.414 "nvme_iov_md": false 00:25:38.414 }, 00:25:38.414 "memory_domains": [ 00:25:38.414 { 00:25:38.414 "dma_device_id": "system", 00:25:38.414 "dma_device_type": 1 00:25:38.414 }, 00:25:38.414 { 00:25:38.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:38.415 "dma_device_type": 2 00:25:38.415 } 00:25:38.415 ], 00:25:38.415 "driver_specific": {} 00:25:38.415 } 00:25:38.415 ] 00:25:38.415 09:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:38.415 09:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:38.415 09:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:38.415 09:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:38.415 09:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:38.415 09:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:38.415 09:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:38.415 09:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:38.415 09:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:38.415 09:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:38.415 09:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:38.415 09:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:38.415 09:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:38.415 09:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:38.415 "name": 
"Existed_Raid", 00:25:38.415 "uuid": "e3d4d257-428f-11ef-a0af-c98d8ee52a94", 00:25:38.415 "strip_size_kb": 64, 00:25:38.415 "state": "configuring", 00:25:38.415 "raid_level": "concat", 00:25:38.415 "superblock": true, 00:25:38.415 "num_base_bdevs": 4, 00:25:38.415 "num_base_bdevs_discovered": 3, 00:25:38.415 "num_base_bdevs_operational": 4, 00:25:38.415 "base_bdevs_list": [ 00:25:38.415 { 00:25:38.415 "name": "BaseBdev1", 00:25:38.415 "uuid": "e4d221b7-428f-11ef-a0af-c98d8ee52a94", 00:25:38.415 "is_configured": true, 00:25:38.415 "data_offset": 2048, 00:25:38.415 "data_size": 63488 00:25:38.415 }, 00:25:38.415 { 00:25:38.415 "name": null, 00:25:38.415 "uuid": "e2bf179e-428f-11ef-a0af-c98d8ee52a94", 00:25:38.415 "is_configured": false, 00:25:38.415 "data_offset": 2048, 00:25:38.415 "data_size": 63488 00:25:38.415 }, 00:25:38.415 { 00:25:38.415 "name": "BaseBdev3", 00:25:38.415 "uuid": "e321f997-428f-11ef-a0af-c98d8ee52a94", 00:25:38.415 "is_configured": true, 00:25:38.415 "data_offset": 2048, 00:25:38.415 "data_size": 63488 00:25:38.415 }, 00:25:38.415 { 00:25:38.415 "name": "BaseBdev4", 00:25:38.415 "uuid": "e37f5d71-428f-11ef-a0af-c98d8ee52a94", 00:25:38.415 "is_configured": true, 00:25:38.415 "data_offset": 2048, 00:25:38.415 "data_size": 63488 00:25:38.415 } 00:25:38.415 ] 00:25:38.415 }' 00:25:38.415 09:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:38.415 09:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:38.980 09:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:38.980 09:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:38.980 09:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:25:38.980 09:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:25:39.239 [2024-07-15 09:52:07.182823] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:39.239 09:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:39.239 09:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:39.239 09:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:39.239 09:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:39.239 09:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:39.239 09:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:39.239 09:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:39.239 09:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:39.239 09:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:39.239 09:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:39.239 09:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:39.239 09:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:39.498 09:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:39.498 "name": "Existed_Raid", 00:25:39.498 "uuid": "e3d4d257-428f-11ef-a0af-c98d8ee52a94", 00:25:39.498 "strip_size_kb": 64, 00:25:39.498 "state": "configuring", 00:25:39.498 "raid_level": "concat", 00:25:39.498 "superblock": true, 00:25:39.498 "num_base_bdevs": 4, 00:25:39.498 "num_base_bdevs_discovered": 2, 00:25:39.498 "num_base_bdevs_operational": 4, 00:25:39.498 "base_bdevs_list": [ 00:25:39.498 { 00:25:39.498 "name": "BaseBdev1", 00:25:39.498 "uuid": "e4d221b7-428f-11ef-a0af-c98d8ee52a94", 00:25:39.498 "is_configured": true, 00:25:39.498 "data_offset": 2048, 00:25:39.498 "data_size": 63488 00:25:39.498 }, 00:25:39.498 { 00:25:39.498 "name": null, 00:25:39.498 "uuid": "e2bf179e-428f-11ef-a0af-c98d8ee52a94", 00:25:39.498 "is_configured": false, 00:25:39.498 "data_offset": 2048, 00:25:39.498 "data_size": 63488 00:25:39.498 }, 00:25:39.498 { 00:25:39.498 "name": null, 00:25:39.498 "uuid": "e321f997-428f-11ef-a0af-c98d8ee52a94", 00:25:39.498 "is_configured": false, 00:25:39.498 "data_offset": 2048, 00:25:39.498 "data_size": 63488 00:25:39.498 }, 00:25:39.498 { 00:25:39.498 "name": "BaseBdev4", 00:25:39.498 "uuid": "e37f5d71-428f-11ef-a0af-c98d8ee52a94", 00:25:39.498 "is_configured": true, 00:25:39.498 "data_offset": 2048, 00:25:39.498 "data_size": 63488 00:25:39.498 } 00:25:39.498 ] 00:25:39.498 }' 00:25:39.498 09:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:39.499 09:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:39.757 09:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:39.757 09:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:40.016 09:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:25:40.016 09:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:25:40.275 [2024-07-15 09:52:08.126886] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:40.275 09:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:40.275 09:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:40.275 09:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:40.275 09:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:40.275 09:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:40.275 09:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:40.275 09:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:40.275 09:52:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:40.275 09:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:40.275 09:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:40.275 09:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:40.275 09:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:40.275 09:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:40.275 "name": "Existed_Raid", 00:25:40.275 "uuid": "e3d4d257-428f-11ef-a0af-c98d8ee52a94", 00:25:40.275 "strip_size_kb": 64, 00:25:40.275 "state": "configuring", 00:25:40.275 "raid_level": "concat", 00:25:40.275 "superblock": true, 00:25:40.275 "num_base_bdevs": 4, 00:25:40.275 "num_base_bdevs_discovered": 3, 00:25:40.275 "num_base_bdevs_operational": 4, 00:25:40.275 "base_bdevs_list": [ 00:25:40.275 { 00:25:40.275 "name": "BaseBdev1", 00:25:40.275 "uuid": "e4d221b7-428f-11ef-a0af-c98d8ee52a94", 00:25:40.275 "is_configured": true, 00:25:40.275 "data_offset": 2048, 00:25:40.275 "data_size": 63488 00:25:40.275 }, 00:25:40.275 { 00:25:40.275 "name": null, 00:25:40.275 "uuid": "e2bf179e-428f-11ef-a0af-c98d8ee52a94", 00:25:40.275 "is_configured": false, 00:25:40.275 "data_offset": 2048, 00:25:40.275 "data_size": 63488 00:25:40.275 }, 00:25:40.275 { 00:25:40.275 "name": "BaseBdev3", 00:25:40.275 "uuid": "e321f997-428f-11ef-a0af-c98d8ee52a94", 00:25:40.275 "is_configured": true, 00:25:40.275 "data_offset": 2048, 00:25:40.275 "data_size": 63488 00:25:40.275 }, 00:25:40.275 { 00:25:40.275 "name": "BaseBdev4", 00:25:40.275 "uuid": "e37f5d71-428f-11ef-a0af-c98d8ee52a94", 00:25:40.275 "is_configured": true, 00:25:40.275 "data_offset": 2048, 00:25:40.275 "data_size": 63488 00:25:40.275 } 00:25:40.275 ] 00:25:40.275 }' 00:25:40.275 09:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:40.275 09:52:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:40.843 09:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:40.843 09:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:40.843 09:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:25:40.843 09:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:41.102 [2024-07-15 09:52:09.062941] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:41.102 09:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:41.102 09:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:41.102 09:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:41.102 09:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:41.102 09:52:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:41.102 09:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:41.102 09:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:41.103 09:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:41.103 09:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:41.103 09:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:41.103 09:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:41.103 09:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:41.360 09:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:41.360 "name": "Existed_Raid", 00:25:41.360 "uuid": "e3d4d257-428f-11ef-a0af-c98d8ee52a94", 00:25:41.360 "strip_size_kb": 64, 00:25:41.360 "state": "configuring", 00:25:41.360 "raid_level": "concat", 00:25:41.360 "superblock": true, 00:25:41.360 "num_base_bdevs": 4, 00:25:41.360 "num_base_bdevs_discovered": 2, 00:25:41.360 "num_base_bdevs_operational": 4, 00:25:41.360 "base_bdevs_list": [ 00:25:41.360 { 00:25:41.360 "name": null, 00:25:41.360 "uuid": "e4d221b7-428f-11ef-a0af-c98d8ee52a94", 00:25:41.360 "is_configured": false, 00:25:41.360 "data_offset": 2048, 00:25:41.360 "data_size": 63488 00:25:41.360 }, 00:25:41.360 { 00:25:41.361 "name": null, 00:25:41.361 "uuid": "e2bf179e-428f-11ef-a0af-c98d8ee52a94", 00:25:41.361 "is_configured": false, 00:25:41.361 "data_offset": 2048, 00:25:41.361 "data_size": 63488 00:25:41.361 }, 00:25:41.361 { 00:25:41.361 "name": "BaseBdev3", 00:25:41.361 "uuid": "e321f997-428f-11ef-a0af-c98d8ee52a94", 00:25:41.361 "is_configured": true, 00:25:41.361 "data_offset": 2048, 00:25:41.361 "data_size": 63488 00:25:41.361 }, 00:25:41.361 { 00:25:41.361 "name": "BaseBdev4", 00:25:41.361 "uuid": "e37f5d71-428f-11ef-a0af-c98d8ee52a94", 00:25:41.361 "is_configured": true, 00:25:41.361 "data_offset": 2048, 00:25:41.361 "data_size": 63488 00:25:41.361 } 00:25:41.361 ] 00:25:41.361 }' 00:25:41.361 09:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:41.361 09:52:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:41.619 09:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:41.619 09:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:41.879 09:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:25:41.879 09:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:25:41.879 [2024-07-15 09:52:09.924086] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:41.879 09:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:41.879 
09:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:41.879 09:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:41.879 09:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:41.879 09:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:41.879 09:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:41.879 09:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:41.879 09:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:41.879 09:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:41.879 09:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:41.879 09:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:41.879 09:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:42.139 09:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:42.139 "name": "Existed_Raid", 00:25:42.139 "uuid": "e3d4d257-428f-11ef-a0af-c98d8ee52a94", 00:25:42.139 "strip_size_kb": 64, 00:25:42.139 "state": "configuring", 00:25:42.139 "raid_level": "concat", 00:25:42.139 "superblock": true, 00:25:42.139 "num_base_bdevs": 4, 00:25:42.139 "num_base_bdevs_discovered": 3, 00:25:42.139 "num_base_bdevs_operational": 4, 00:25:42.139 "base_bdevs_list": [ 00:25:42.139 { 00:25:42.139 "name": null, 00:25:42.139 "uuid": "e4d221b7-428f-11ef-a0af-c98d8ee52a94", 00:25:42.139 "is_configured": false, 00:25:42.139 "data_offset": 2048, 00:25:42.139 "data_size": 63488 00:25:42.139 }, 00:25:42.139 { 00:25:42.139 "name": "BaseBdev2", 00:25:42.139 "uuid": "e2bf179e-428f-11ef-a0af-c98d8ee52a94", 00:25:42.139 "is_configured": true, 00:25:42.139 "data_offset": 2048, 00:25:42.139 "data_size": 63488 00:25:42.139 }, 00:25:42.139 { 00:25:42.139 "name": "BaseBdev3", 00:25:42.139 "uuid": "e321f997-428f-11ef-a0af-c98d8ee52a94", 00:25:42.139 "is_configured": true, 00:25:42.139 "data_offset": 2048, 00:25:42.139 "data_size": 63488 00:25:42.139 }, 00:25:42.139 { 00:25:42.139 "name": "BaseBdev4", 00:25:42.139 "uuid": "e37f5d71-428f-11ef-a0af-c98d8ee52a94", 00:25:42.139 "is_configured": true, 00:25:42.139 "data_offset": 2048, 00:25:42.139 "data_size": 63488 00:25:42.139 } 00:25:42.139 ] 00:25:42.139 }' 00:25:42.139 09:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:42.139 09:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:42.398 09:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:42.398 09:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:42.657 09:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:25:42.657 09:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:42.657 09:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:25:42.917 09:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u e4d221b7-428f-11ef-a0af-c98d8ee52a94 00:25:42.917 [2024-07-15 09:52:11.016261] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:25:42.917 [2024-07-15 09:52:11.016310] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xdabbfa34f00 00:25:42.917 [2024-07-15 09:52:11.016314] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:42.917 [2024-07-15 09:52:11.016331] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xdabbfa97e20 00:25:42.917 [2024-07-15 09:52:11.016368] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xdabbfa34f00 00:25:42.917 [2024-07-15 09:52:11.016371] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xdabbfa34f00 00:25:42.917 [2024-07-15 09:52:11.016387] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:43.176 NewBaseBdev 00:25:43.176 09:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:25:43.176 09:52:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:25:43.176 09:52:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:43.176 09:52:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:43.176 09:52:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:43.176 09:52:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:43.176 09:52:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:43.176 09:52:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:25:43.435 [ 00:25:43.435 { 00:25:43.435 "name": "NewBaseBdev", 00:25:43.435 "aliases": [ 00:25:43.435 "e4d221b7-428f-11ef-a0af-c98d8ee52a94" 00:25:43.435 ], 00:25:43.435 "product_name": "Malloc disk", 00:25:43.435 "block_size": 512, 00:25:43.435 "num_blocks": 65536, 00:25:43.435 "uuid": "e4d221b7-428f-11ef-a0af-c98d8ee52a94", 00:25:43.435 "assigned_rate_limits": { 00:25:43.435 "rw_ios_per_sec": 0, 00:25:43.435 "rw_mbytes_per_sec": 0, 00:25:43.435 "r_mbytes_per_sec": 0, 00:25:43.435 "w_mbytes_per_sec": 0 00:25:43.435 }, 00:25:43.435 "claimed": true, 00:25:43.435 "claim_type": "exclusive_write", 00:25:43.435 "zoned": false, 00:25:43.435 "supported_io_types": { 00:25:43.435 "read": true, 00:25:43.435 "write": true, 00:25:43.435 "unmap": true, 00:25:43.435 "flush": true, 00:25:43.435 "reset": true, 00:25:43.435 "nvme_admin": false, 00:25:43.435 "nvme_io": false, 00:25:43.435 "nvme_io_md": false, 00:25:43.435 "write_zeroes": true, 00:25:43.435 "zcopy": true, 00:25:43.435 "get_zone_info": false, 00:25:43.435 "zone_management": false, 00:25:43.435 "zone_append": 
false, 00:25:43.435 "compare": false, 00:25:43.435 "compare_and_write": false, 00:25:43.435 "abort": true, 00:25:43.435 "seek_hole": false, 00:25:43.435 "seek_data": false, 00:25:43.435 "copy": true, 00:25:43.435 "nvme_iov_md": false 00:25:43.435 }, 00:25:43.435 "memory_domains": [ 00:25:43.435 { 00:25:43.435 "dma_device_id": "system", 00:25:43.435 "dma_device_type": 1 00:25:43.435 }, 00:25:43.435 { 00:25:43.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:43.435 "dma_device_type": 2 00:25:43.435 } 00:25:43.435 ], 00:25:43.435 "driver_specific": {} 00:25:43.435 } 00:25:43.435 ] 00:25:43.435 09:52:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:43.435 09:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:25:43.435 09:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:43.435 09:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:43.435 09:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:43.435 09:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:43.435 09:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:43.435 09:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:43.435 09:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:43.435 09:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:43.435 09:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:43.435 09:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:43.435 09:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:43.694 09:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:43.694 "name": "Existed_Raid", 00:25:43.694 "uuid": "e3d4d257-428f-11ef-a0af-c98d8ee52a94", 00:25:43.694 "strip_size_kb": 64, 00:25:43.694 "state": "online", 00:25:43.694 "raid_level": "concat", 00:25:43.694 "superblock": true, 00:25:43.694 "num_base_bdevs": 4, 00:25:43.694 "num_base_bdevs_discovered": 4, 00:25:43.694 "num_base_bdevs_operational": 4, 00:25:43.694 "base_bdevs_list": [ 00:25:43.694 { 00:25:43.694 "name": "NewBaseBdev", 00:25:43.694 "uuid": "e4d221b7-428f-11ef-a0af-c98d8ee52a94", 00:25:43.694 "is_configured": true, 00:25:43.694 "data_offset": 2048, 00:25:43.694 "data_size": 63488 00:25:43.694 }, 00:25:43.694 { 00:25:43.694 "name": "BaseBdev2", 00:25:43.694 "uuid": "e2bf179e-428f-11ef-a0af-c98d8ee52a94", 00:25:43.694 "is_configured": true, 00:25:43.694 "data_offset": 2048, 00:25:43.694 "data_size": 63488 00:25:43.694 }, 00:25:43.694 { 00:25:43.694 "name": "BaseBdev3", 00:25:43.694 "uuid": "e321f997-428f-11ef-a0af-c98d8ee52a94", 00:25:43.694 "is_configured": true, 00:25:43.694 "data_offset": 2048, 00:25:43.694 "data_size": 63488 00:25:43.694 }, 00:25:43.694 { 00:25:43.694 "name": "BaseBdev4", 00:25:43.694 "uuid": "e37f5d71-428f-11ef-a0af-c98d8ee52a94", 00:25:43.694 "is_configured": true, 00:25:43.694 
"data_offset": 2048, 00:25:43.694 "data_size": 63488 00:25:43.694 } 00:25:43.694 ] 00:25:43.694 }' 00:25:43.694 09:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:43.694 09:52:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:43.954 09:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:25:43.954 09:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:43.954 09:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:43.954 09:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:43.954 09:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:43.954 09:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:25:43.954 09:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:43.954 09:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:44.213 [2024-07-15 09:52:12.076223] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:44.213 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:44.213 "name": "Existed_Raid", 00:25:44.213 "aliases": [ 00:25:44.213 "e3d4d257-428f-11ef-a0af-c98d8ee52a94" 00:25:44.213 ], 00:25:44.213 "product_name": "Raid Volume", 00:25:44.213 "block_size": 512, 00:25:44.213 "num_blocks": 253952, 00:25:44.213 "uuid": "e3d4d257-428f-11ef-a0af-c98d8ee52a94", 00:25:44.213 "assigned_rate_limits": { 00:25:44.213 "rw_ios_per_sec": 0, 00:25:44.213 "rw_mbytes_per_sec": 0, 00:25:44.213 "r_mbytes_per_sec": 0, 00:25:44.213 "w_mbytes_per_sec": 0 00:25:44.213 }, 00:25:44.213 "claimed": false, 00:25:44.213 "zoned": false, 00:25:44.213 "supported_io_types": { 00:25:44.213 "read": true, 00:25:44.213 "write": true, 00:25:44.213 "unmap": true, 00:25:44.213 "flush": true, 00:25:44.213 "reset": true, 00:25:44.213 "nvme_admin": false, 00:25:44.213 "nvme_io": false, 00:25:44.213 "nvme_io_md": false, 00:25:44.213 "write_zeroes": true, 00:25:44.213 "zcopy": false, 00:25:44.213 "get_zone_info": false, 00:25:44.213 "zone_management": false, 00:25:44.213 "zone_append": false, 00:25:44.213 "compare": false, 00:25:44.213 "compare_and_write": false, 00:25:44.213 "abort": false, 00:25:44.213 "seek_hole": false, 00:25:44.213 "seek_data": false, 00:25:44.213 "copy": false, 00:25:44.213 "nvme_iov_md": false 00:25:44.213 }, 00:25:44.213 "memory_domains": [ 00:25:44.213 { 00:25:44.213 "dma_device_id": "system", 00:25:44.213 "dma_device_type": 1 00:25:44.213 }, 00:25:44.213 { 00:25:44.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:44.213 "dma_device_type": 2 00:25:44.213 }, 00:25:44.213 { 00:25:44.213 "dma_device_id": "system", 00:25:44.213 "dma_device_type": 1 00:25:44.213 }, 00:25:44.213 { 00:25:44.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:44.213 "dma_device_type": 2 00:25:44.213 }, 00:25:44.213 { 00:25:44.213 "dma_device_id": "system", 00:25:44.213 "dma_device_type": 1 00:25:44.213 }, 00:25:44.213 { 00:25:44.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:44.213 "dma_device_type": 2 00:25:44.213 }, 00:25:44.213 { 00:25:44.213 "dma_device_id": "system", 00:25:44.213 
"dma_device_type": 1 00:25:44.213 }, 00:25:44.213 { 00:25:44.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:44.213 "dma_device_type": 2 00:25:44.213 } 00:25:44.213 ], 00:25:44.213 "driver_specific": { 00:25:44.213 "raid": { 00:25:44.213 "uuid": "e3d4d257-428f-11ef-a0af-c98d8ee52a94", 00:25:44.213 "strip_size_kb": 64, 00:25:44.213 "state": "online", 00:25:44.213 "raid_level": "concat", 00:25:44.213 "superblock": true, 00:25:44.213 "num_base_bdevs": 4, 00:25:44.213 "num_base_bdevs_discovered": 4, 00:25:44.213 "num_base_bdevs_operational": 4, 00:25:44.213 "base_bdevs_list": [ 00:25:44.213 { 00:25:44.213 "name": "NewBaseBdev", 00:25:44.213 "uuid": "e4d221b7-428f-11ef-a0af-c98d8ee52a94", 00:25:44.213 "is_configured": true, 00:25:44.213 "data_offset": 2048, 00:25:44.213 "data_size": 63488 00:25:44.213 }, 00:25:44.213 { 00:25:44.213 "name": "BaseBdev2", 00:25:44.213 "uuid": "e2bf179e-428f-11ef-a0af-c98d8ee52a94", 00:25:44.213 "is_configured": true, 00:25:44.213 "data_offset": 2048, 00:25:44.213 "data_size": 63488 00:25:44.213 }, 00:25:44.213 { 00:25:44.213 "name": "BaseBdev3", 00:25:44.213 "uuid": "e321f997-428f-11ef-a0af-c98d8ee52a94", 00:25:44.213 "is_configured": true, 00:25:44.213 "data_offset": 2048, 00:25:44.213 "data_size": 63488 00:25:44.213 }, 00:25:44.213 { 00:25:44.213 "name": "BaseBdev4", 00:25:44.213 "uuid": "e37f5d71-428f-11ef-a0af-c98d8ee52a94", 00:25:44.213 "is_configured": true, 00:25:44.213 "data_offset": 2048, 00:25:44.213 "data_size": 63488 00:25:44.213 } 00:25:44.213 ] 00:25:44.213 } 00:25:44.213 } 00:25:44.213 }' 00:25:44.213 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:44.213 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:25:44.213 BaseBdev2 00:25:44.213 BaseBdev3 00:25:44.213 BaseBdev4' 00:25:44.213 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:44.213 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:25:44.213 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:44.213 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:44.213 "name": "NewBaseBdev", 00:25:44.213 "aliases": [ 00:25:44.213 "e4d221b7-428f-11ef-a0af-c98d8ee52a94" 00:25:44.213 ], 00:25:44.213 "product_name": "Malloc disk", 00:25:44.213 "block_size": 512, 00:25:44.213 "num_blocks": 65536, 00:25:44.213 "uuid": "e4d221b7-428f-11ef-a0af-c98d8ee52a94", 00:25:44.213 "assigned_rate_limits": { 00:25:44.213 "rw_ios_per_sec": 0, 00:25:44.213 "rw_mbytes_per_sec": 0, 00:25:44.213 "r_mbytes_per_sec": 0, 00:25:44.213 "w_mbytes_per_sec": 0 00:25:44.213 }, 00:25:44.213 "claimed": true, 00:25:44.213 "claim_type": "exclusive_write", 00:25:44.213 "zoned": false, 00:25:44.213 "supported_io_types": { 00:25:44.213 "read": true, 00:25:44.213 "write": true, 00:25:44.213 "unmap": true, 00:25:44.213 "flush": true, 00:25:44.213 "reset": true, 00:25:44.213 "nvme_admin": false, 00:25:44.213 "nvme_io": false, 00:25:44.213 "nvme_io_md": false, 00:25:44.213 "write_zeroes": true, 00:25:44.213 "zcopy": true, 00:25:44.213 "get_zone_info": false, 00:25:44.213 "zone_management": false, 00:25:44.213 "zone_append": false, 00:25:44.213 "compare": false, 00:25:44.213 
"compare_and_write": false, 00:25:44.213 "abort": true, 00:25:44.213 "seek_hole": false, 00:25:44.213 "seek_data": false, 00:25:44.213 "copy": true, 00:25:44.213 "nvme_iov_md": false 00:25:44.213 }, 00:25:44.213 "memory_domains": [ 00:25:44.214 { 00:25:44.214 "dma_device_id": "system", 00:25:44.214 "dma_device_type": 1 00:25:44.214 }, 00:25:44.214 { 00:25:44.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:44.214 "dma_device_type": 2 00:25:44.214 } 00:25:44.214 ], 00:25:44.214 "driver_specific": {} 00:25:44.214 }' 00:25:44.214 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:44.472 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:44.472 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:44.472 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:44.472 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:44.472 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:44.472 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:44.472 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:44.472 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:44.472 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:44.473 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:44.473 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:44.473 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:44.473 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:44.473 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:44.731 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:44.731 "name": "BaseBdev2", 00:25:44.731 "aliases": [ 00:25:44.731 "e2bf179e-428f-11ef-a0af-c98d8ee52a94" 00:25:44.731 ], 00:25:44.731 "product_name": "Malloc disk", 00:25:44.731 "block_size": 512, 00:25:44.731 "num_blocks": 65536, 00:25:44.731 "uuid": "e2bf179e-428f-11ef-a0af-c98d8ee52a94", 00:25:44.731 "assigned_rate_limits": { 00:25:44.731 "rw_ios_per_sec": 0, 00:25:44.731 "rw_mbytes_per_sec": 0, 00:25:44.731 "r_mbytes_per_sec": 0, 00:25:44.731 "w_mbytes_per_sec": 0 00:25:44.731 }, 00:25:44.731 "claimed": true, 00:25:44.731 "claim_type": "exclusive_write", 00:25:44.731 "zoned": false, 00:25:44.731 "supported_io_types": { 00:25:44.731 "read": true, 00:25:44.731 "write": true, 00:25:44.731 "unmap": true, 00:25:44.731 "flush": true, 00:25:44.731 "reset": true, 00:25:44.731 "nvme_admin": false, 00:25:44.731 "nvme_io": false, 00:25:44.731 "nvme_io_md": false, 00:25:44.731 "write_zeroes": true, 00:25:44.731 "zcopy": true, 00:25:44.731 "get_zone_info": false, 00:25:44.731 "zone_management": false, 00:25:44.731 "zone_append": false, 00:25:44.731 "compare": false, 00:25:44.731 "compare_and_write": false, 00:25:44.731 "abort": true, 00:25:44.731 "seek_hole": false, 00:25:44.731 "seek_data": false, 00:25:44.731 "copy": true, 
00:25:44.731 "nvme_iov_md": false 00:25:44.731 }, 00:25:44.731 "memory_domains": [ 00:25:44.731 { 00:25:44.731 "dma_device_id": "system", 00:25:44.731 "dma_device_type": 1 00:25:44.731 }, 00:25:44.731 { 00:25:44.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:44.731 "dma_device_type": 2 00:25:44.731 } 00:25:44.731 ], 00:25:44.731 "driver_specific": {} 00:25:44.731 }' 00:25:44.731 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:44.731 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:44.731 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:44.731 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:44.731 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:44.731 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:44.731 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:44.731 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:44.731 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:44.731 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:44.731 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:44.731 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:44.731 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:44.731 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:44.731 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:44.990 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:44.990 "name": "BaseBdev3", 00:25:44.990 "aliases": [ 00:25:44.991 "e321f997-428f-11ef-a0af-c98d8ee52a94" 00:25:44.991 ], 00:25:44.991 "product_name": "Malloc disk", 00:25:44.991 "block_size": 512, 00:25:44.991 "num_blocks": 65536, 00:25:44.991 "uuid": "e321f997-428f-11ef-a0af-c98d8ee52a94", 00:25:44.991 "assigned_rate_limits": { 00:25:44.991 "rw_ios_per_sec": 0, 00:25:44.991 "rw_mbytes_per_sec": 0, 00:25:44.991 "r_mbytes_per_sec": 0, 00:25:44.991 "w_mbytes_per_sec": 0 00:25:44.991 }, 00:25:44.991 "claimed": true, 00:25:44.991 "claim_type": "exclusive_write", 00:25:44.991 "zoned": false, 00:25:44.991 "supported_io_types": { 00:25:44.991 "read": true, 00:25:44.991 "write": true, 00:25:44.991 "unmap": true, 00:25:44.991 "flush": true, 00:25:44.991 "reset": true, 00:25:44.991 "nvme_admin": false, 00:25:44.991 "nvme_io": false, 00:25:44.991 "nvme_io_md": false, 00:25:44.991 "write_zeroes": true, 00:25:44.991 "zcopy": true, 00:25:44.991 "get_zone_info": false, 00:25:44.991 "zone_management": false, 00:25:44.991 "zone_append": false, 00:25:44.991 "compare": false, 00:25:44.991 "compare_and_write": false, 00:25:44.991 "abort": true, 00:25:44.991 "seek_hole": false, 00:25:44.991 "seek_data": false, 00:25:44.991 "copy": true, 00:25:44.991 "nvme_iov_md": false 00:25:44.991 }, 00:25:44.991 "memory_domains": [ 00:25:44.991 { 00:25:44.991 "dma_device_id": "system", 00:25:44.991 
"dma_device_type": 1 00:25:44.991 }, 00:25:44.991 { 00:25:44.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:44.991 "dma_device_type": 2 00:25:44.991 } 00:25:44.991 ], 00:25:44.991 "driver_specific": {} 00:25:44.991 }' 00:25:44.991 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:44.991 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:44.991 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:44.991 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:44.991 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:44.991 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:44.991 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:44.991 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:44.991 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:44.991 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:44.991 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:44.991 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:44.991 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:44.991 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:25:44.991 09:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:45.277 09:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:45.277 "name": "BaseBdev4", 00:25:45.277 "aliases": [ 00:25:45.277 "e37f5d71-428f-11ef-a0af-c98d8ee52a94" 00:25:45.277 ], 00:25:45.277 "product_name": "Malloc disk", 00:25:45.277 "block_size": 512, 00:25:45.277 "num_blocks": 65536, 00:25:45.277 "uuid": "e37f5d71-428f-11ef-a0af-c98d8ee52a94", 00:25:45.277 "assigned_rate_limits": { 00:25:45.277 "rw_ios_per_sec": 0, 00:25:45.277 "rw_mbytes_per_sec": 0, 00:25:45.277 "r_mbytes_per_sec": 0, 00:25:45.277 "w_mbytes_per_sec": 0 00:25:45.277 }, 00:25:45.277 "claimed": true, 00:25:45.277 "claim_type": "exclusive_write", 00:25:45.277 "zoned": false, 00:25:45.277 "supported_io_types": { 00:25:45.277 "read": true, 00:25:45.277 "write": true, 00:25:45.277 "unmap": true, 00:25:45.277 "flush": true, 00:25:45.277 "reset": true, 00:25:45.277 "nvme_admin": false, 00:25:45.277 "nvme_io": false, 00:25:45.277 "nvme_io_md": false, 00:25:45.277 "write_zeroes": true, 00:25:45.277 "zcopy": true, 00:25:45.277 "get_zone_info": false, 00:25:45.277 "zone_management": false, 00:25:45.277 "zone_append": false, 00:25:45.277 "compare": false, 00:25:45.277 "compare_and_write": false, 00:25:45.277 "abort": true, 00:25:45.277 "seek_hole": false, 00:25:45.277 "seek_data": false, 00:25:45.277 "copy": true, 00:25:45.277 "nvme_iov_md": false 00:25:45.277 }, 00:25:45.277 "memory_domains": [ 00:25:45.277 { 00:25:45.277 "dma_device_id": "system", 00:25:45.277 "dma_device_type": 1 00:25:45.277 }, 00:25:45.277 { 00:25:45.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:45.277 "dma_device_type": 2 
00:25:45.277 } 00:25:45.277 ], 00:25:45.277 "driver_specific": {} 00:25:45.277 }' 00:25:45.277 09:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:45.277 09:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:45.277 09:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:45.277 09:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:45.277 09:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:45.277 09:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:45.277 09:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:45.277 09:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:45.277 09:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:45.277 09:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:45.277 09:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:45.277 09:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:45.277 09:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:45.537 [2024-07-15 09:52:13.440260] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:45.537 [2024-07-15 09:52:13.440286] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:45.537 [2024-07-15 09:52:13.440302] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:45.537 [2024-07-15 09:52:13.440317] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:45.537 [2024-07-15 09:52:13.440320] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xdabbfa34f00 name Existed_Raid, state offline 00:25:45.537 09:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 61314 00:25:45.537 09:52:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 61314 ']' 00:25:45.537 09:52:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 61314 00:25:45.537 09:52:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:25:45.537 09:52:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:25:45.537 09:52:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 61314 00:25:45.537 09:52:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:25:45.537 09:52:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:25:45.537 09:52:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:25:45.537 09:52:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61314' 00:25:45.537 killing process with pid 61314 00:25:45.537 09:52:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 61314 00:25:45.537 
[2024-07-15 09:52:13.471457] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:45.537 09:52:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 61314 00:25:45.537 [2024-07-15 09:52:13.506222] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:45.795 09:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:25:45.795 00:25:45.795 real 0m22.530s 00:25:45.795 user 0m40.288s 00:25:45.795 sys 0m3.964s 00:25:45.795 09:52:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:45.795 ************************************ 00:25:45.795 END TEST raid_state_function_test_sb 00:25:45.795 ************************************ 00:25:45.795 09:52:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:45.795 09:52:13 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:25:45.795 09:52:13 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:25:45.795 09:52:13 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:25:45.795 09:52:13 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:45.795 09:52:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:45.795 ************************************ 00:25:45.795 START TEST raid_superblock_test 00:25:45.795 ************************************ 00:25:45.795 09:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 4 00:25:45.795 09:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:25:45.795 09:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:25:45.795 09:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:25:45.795 09:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:25:45.795 09:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:25:45.795 09:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:25:45.795 09:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:25:45.795 09:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:25:45.795 09:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:25:45.795 09:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:25:45.795 09:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:25:45.795 09:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:25:45.795 09:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:25:45.795 09:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:25:45.795 09:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:25:45.795 09:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:25:45.795 09:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=62112 00:25:45.795 09:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 62112 /var/tmp/spdk-raid.sock 00:25:45.795 09:52:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@829 -- # '[' -z 62112 ']' 00:25:45.795 09:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:25:45.795 09:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:45.795 09:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:45.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:45.795 09:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:45.795 09:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:45.795 09:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.795 [2024-07-15 09:52:13.841802] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:25:45.795 [2024-07-15 09:52:13.842157] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:25:46.730 EAL: TSC is not safe to use in SMP mode 00:25:46.730 EAL: TSC is not invariant 00:25:46.730 [2024-07-15 09:52:14.561505] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.730 [2024-07-15 09:52:14.676769] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:25:46.730 [2024-07-15 09:52:14.679283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.730 [2024-07-15 09:52:14.680000] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:46.730 [2024-07-15 09:52:14.680011] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:46.989 09:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:46.989 09:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:25:46.989 09:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:25:46.989 09:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:46.989 09:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:25:46.989 09:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:25:46.989 09:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:46.989 09:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:46.989 09:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:25:46.989 09:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:46.989 09:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:25:46.989 malloc1 00:25:47.248 09:52:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:47.248 [2024-07-15 09:52:15.331180] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:47.248 [2024-07-15 09:52:15.331242] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:47.248 [2024-07-15 09:52:15.331252] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3f04d2234780 00:25:47.248 [2024-07-15 09:52:15.331259] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:47.248 [2024-07-15 09:52:15.332242] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:47.248 [2024-07-15 09:52:15.332275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:47.248 pt1 00:25:47.248 09:52:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:25:47.248 09:52:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:47.248 09:52:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:25:47.248 09:52:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:25:47.248 09:52:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:47.248 09:52:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:47.248 09:52:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:25:47.507 09:52:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:47.507 09:52:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:25:47.507 malloc2 00:25:47.507 09:52:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:47.768 [2024-07-15 09:52:15.747219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:47.768 [2024-07-15 09:52:15.747291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:47.768 [2024-07-15 09:52:15.747303] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3f04d2234c80 00:25:47.768 [2024-07-15 09:52:15.747310] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:47.768 [2024-07-15 09:52:15.748092] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:47.768 [2024-07-15 09:52:15.748121] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:47.768 pt2 00:25:47.768 09:52:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:25:47.768 09:52:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:47.768 09:52:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:25:47.768 09:52:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:25:47.768 09:52:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:25:47.768 09:52:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:47.768 09:52:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # 
base_bdevs_pt+=($bdev_pt) 00:25:47.768 09:52:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:47.768 09:52:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:25:48.027 malloc3 00:25:48.027 09:52:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:48.027 [2024-07-15 09:52:16.123218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:48.027 [2024-07-15 09:52:16.123285] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:48.027 [2024-07-15 09:52:16.123295] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3f04d2235180 00:25:48.027 [2024-07-15 09:52:16.123302] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:48.027 [2024-07-15 09:52:16.123987] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:48.027 [2024-07-15 09:52:16.124017] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:48.027 pt3 00:25:48.287 09:52:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:25:48.287 09:52:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:48.287 09:52:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:25:48.287 09:52:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:25:48.287 09:52:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:25:48.287 09:52:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:48.287 09:52:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:25:48.287 09:52:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:48.287 09:52:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:25:48.287 malloc4 00:25:48.287 09:52:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:48.549 [2024-07-15 09:52:16.547251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:48.549 [2024-07-15 09:52:16.547337] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:48.549 [2024-07-15 09:52:16.547349] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3f04d2235680 00:25:48.549 [2024-07-15 09:52:16.547357] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:48.549 [2024-07-15 09:52:16.548168] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:48.549 [2024-07-15 09:52:16.548195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:48.549 pt4 00:25:48.549 09:52:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:25:48.549 09:52:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:48.549 09:52:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:25:48.808 [2024-07-15 09:52:16.763297] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:48.808 [2024-07-15 09:52:16.763944] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:48.809 [2024-07-15 09:52:16.763969] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:48.809 [2024-07-15 09:52:16.763979] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:48.809 [2024-07-15 09:52:16.764032] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3f04d2235900 00:25:48.809 [2024-07-15 09:52:16.764036] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:48.809 [2024-07-15 09:52:16.764072] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3f04d2297e20 00:25:48.809 [2024-07-15 09:52:16.764147] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3f04d2235900 00:25:48.809 [2024-07-15 09:52:16.764151] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3f04d2235900 00:25:48.809 [2024-07-15 09:52:16.764172] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:48.809 09:52:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:25:48.809 09:52:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:48.809 09:52:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:48.809 09:52:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:48.809 09:52:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:48.809 09:52:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:48.809 09:52:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:48.809 09:52:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:48.809 09:52:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:48.809 09:52:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:48.809 09:52:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:48.809 09:52:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:49.068 09:52:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:49.068 "name": "raid_bdev1", 00:25:49.068 "uuid": "eb533fd8-428f-11ef-a0af-c98d8ee52a94", 00:25:49.068 "strip_size_kb": 64, 00:25:49.068 "state": "online", 00:25:49.068 "raid_level": "concat", 00:25:49.068 "superblock": true, 00:25:49.068 "num_base_bdevs": 4, 00:25:49.068 "num_base_bdevs_discovered": 4, 00:25:49.068 "num_base_bdevs_operational": 4, 00:25:49.068 "base_bdevs_list": [ 00:25:49.068 { 00:25:49.068 "name": "pt1", 00:25:49.068 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:25:49.068 "is_configured": true, 00:25:49.068 "data_offset": 2048, 00:25:49.068 "data_size": 63488 00:25:49.068 }, 00:25:49.068 { 00:25:49.068 "name": "pt2", 00:25:49.068 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:49.068 "is_configured": true, 00:25:49.068 "data_offset": 2048, 00:25:49.068 "data_size": 63488 00:25:49.068 }, 00:25:49.068 { 00:25:49.068 "name": "pt3", 00:25:49.068 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:49.068 "is_configured": true, 00:25:49.068 "data_offset": 2048, 00:25:49.068 "data_size": 63488 00:25:49.068 }, 00:25:49.068 { 00:25:49.068 "name": "pt4", 00:25:49.068 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:49.068 "is_configured": true, 00:25:49.068 "data_offset": 2048, 00:25:49.068 "data_size": 63488 00:25:49.068 } 00:25:49.068 ] 00:25:49.068 }' 00:25:49.068 09:52:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:49.068 09:52:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:49.327 09:52:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:25:49.327 09:52:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:25:49.327 09:52:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:49.327 09:52:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:49.327 09:52:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:49.327 09:52:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:25:49.327 09:52:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:49.327 09:52:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:49.587 [2024-07-15 09:52:17.583386] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:49.587 09:52:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:49.587 "name": "raid_bdev1", 00:25:49.587 "aliases": [ 00:25:49.587 "eb533fd8-428f-11ef-a0af-c98d8ee52a94" 00:25:49.587 ], 00:25:49.587 "product_name": "Raid Volume", 00:25:49.587 "block_size": 512, 00:25:49.587 "num_blocks": 253952, 00:25:49.587 "uuid": "eb533fd8-428f-11ef-a0af-c98d8ee52a94", 00:25:49.587 "assigned_rate_limits": { 00:25:49.587 "rw_ios_per_sec": 0, 00:25:49.587 "rw_mbytes_per_sec": 0, 00:25:49.587 "r_mbytes_per_sec": 0, 00:25:49.587 "w_mbytes_per_sec": 0 00:25:49.587 }, 00:25:49.587 "claimed": false, 00:25:49.587 "zoned": false, 00:25:49.587 "supported_io_types": { 00:25:49.587 "read": true, 00:25:49.587 "write": true, 00:25:49.587 "unmap": true, 00:25:49.587 "flush": true, 00:25:49.587 "reset": true, 00:25:49.587 "nvme_admin": false, 00:25:49.587 "nvme_io": false, 00:25:49.587 "nvme_io_md": false, 00:25:49.587 "write_zeroes": true, 00:25:49.587 "zcopy": false, 00:25:49.587 "get_zone_info": false, 00:25:49.587 "zone_management": false, 00:25:49.587 "zone_append": false, 00:25:49.587 "compare": false, 00:25:49.587 "compare_and_write": false, 00:25:49.587 "abort": false, 00:25:49.587 "seek_hole": false, 00:25:49.587 "seek_data": false, 00:25:49.587 "copy": false, 00:25:49.587 "nvme_iov_md": false 00:25:49.587 }, 00:25:49.587 "memory_domains": [ 00:25:49.587 { 00:25:49.587 "dma_device_id": "system", 00:25:49.587 
"dma_device_type": 1 00:25:49.587 }, 00:25:49.587 { 00:25:49.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:49.587 "dma_device_type": 2 00:25:49.587 }, 00:25:49.587 { 00:25:49.587 "dma_device_id": "system", 00:25:49.587 "dma_device_type": 1 00:25:49.587 }, 00:25:49.587 { 00:25:49.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:49.587 "dma_device_type": 2 00:25:49.587 }, 00:25:49.587 { 00:25:49.587 "dma_device_id": "system", 00:25:49.587 "dma_device_type": 1 00:25:49.587 }, 00:25:49.587 { 00:25:49.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:49.587 "dma_device_type": 2 00:25:49.587 }, 00:25:49.587 { 00:25:49.587 "dma_device_id": "system", 00:25:49.587 "dma_device_type": 1 00:25:49.587 }, 00:25:49.587 { 00:25:49.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:49.587 "dma_device_type": 2 00:25:49.587 } 00:25:49.587 ], 00:25:49.587 "driver_specific": { 00:25:49.587 "raid": { 00:25:49.587 "uuid": "eb533fd8-428f-11ef-a0af-c98d8ee52a94", 00:25:49.587 "strip_size_kb": 64, 00:25:49.587 "state": "online", 00:25:49.587 "raid_level": "concat", 00:25:49.587 "superblock": true, 00:25:49.587 "num_base_bdevs": 4, 00:25:49.587 "num_base_bdevs_discovered": 4, 00:25:49.587 "num_base_bdevs_operational": 4, 00:25:49.587 "base_bdevs_list": [ 00:25:49.587 { 00:25:49.587 "name": "pt1", 00:25:49.587 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:49.587 "is_configured": true, 00:25:49.587 "data_offset": 2048, 00:25:49.587 "data_size": 63488 00:25:49.587 }, 00:25:49.587 { 00:25:49.587 "name": "pt2", 00:25:49.587 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:49.587 "is_configured": true, 00:25:49.587 "data_offset": 2048, 00:25:49.587 "data_size": 63488 00:25:49.587 }, 00:25:49.587 { 00:25:49.587 "name": "pt3", 00:25:49.587 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:49.587 "is_configured": true, 00:25:49.587 "data_offset": 2048, 00:25:49.587 "data_size": 63488 00:25:49.587 }, 00:25:49.587 { 00:25:49.587 "name": "pt4", 00:25:49.587 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:49.587 "is_configured": true, 00:25:49.587 "data_offset": 2048, 00:25:49.587 "data_size": 63488 00:25:49.587 } 00:25:49.587 ] 00:25:49.587 } 00:25:49.587 } 00:25:49.587 }' 00:25:49.587 09:52:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:49.587 09:52:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:25:49.587 pt2 00:25:49.587 pt3 00:25:49.587 pt4' 00:25:49.587 09:52:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:49.587 09:52:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:25:49.587 09:52:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:49.847 09:52:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:49.847 "name": "pt1", 00:25:49.847 "aliases": [ 00:25:49.847 "00000000-0000-0000-0000-000000000001" 00:25:49.847 ], 00:25:49.847 "product_name": "passthru", 00:25:49.847 "block_size": 512, 00:25:49.847 "num_blocks": 65536, 00:25:49.847 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:49.847 "assigned_rate_limits": { 00:25:49.847 "rw_ios_per_sec": 0, 00:25:49.847 "rw_mbytes_per_sec": 0, 00:25:49.847 "r_mbytes_per_sec": 0, 00:25:49.847 "w_mbytes_per_sec": 0 00:25:49.847 }, 00:25:49.847 "claimed": true, 00:25:49.847 
"claim_type": "exclusive_write", 00:25:49.847 "zoned": false, 00:25:49.847 "supported_io_types": { 00:25:49.847 "read": true, 00:25:49.847 "write": true, 00:25:49.847 "unmap": true, 00:25:49.847 "flush": true, 00:25:49.847 "reset": true, 00:25:49.847 "nvme_admin": false, 00:25:49.847 "nvme_io": false, 00:25:49.847 "nvme_io_md": false, 00:25:49.847 "write_zeroes": true, 00:25:49.847 "zcopy": true, 00:25:49.847 "get_zone_info": false, 00:25:49.847 "zone_management": false, 00:25:49.847 "zone_append": false, 00:25:49.847 "compare": false, 00:25:49.847 "compare_and_write": false, 00:25:49.847 "abort": true, 00:25:49.847 "seek_hole": false, 00:25:49.847 "seek_data": false, 00:25:49.847 "copy": true, 00:25:49.847 "nvme_iov_md": false 00:25:49.847 }, 00:25:49.847 "memory_domains": [ 00:25:49.847 { 00:25:49.847 "dma_device_id": "system", 00:25:49.847 "dma_device_type": 1 00:25:49.847 }, 00:25:49.847 { 00:25:49.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:49.847 "dma_device_type": 2 00:25:49.847 } 00:25:49.847 ], 00:25:49.847 "driver_specific": { 00:25:49.847 "passthru": { 00:25:49.847 "name": "pt1", 00:25:49.847 "base_bdev_name": "malloc1" 00:25:49.847 } 00:25:49.847 } 00:25:49.847 }' 00:25:49.847 09:52:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:49.847 09:52:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:49.847 09:52:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:49.847 09:52:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:49.847 09:52:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:49.847 09:52:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:49.847 09:52:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:49.847 09:52:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:49.847 09:52:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:49.847 09:52:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:49.847 09:52:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:49.847 09:52:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:49.847 09:52:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:49.847 09:52:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:25:49.847 09:52:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:50.105 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:50.105 "name": "pt2", 00:25:50.105 "aliases": [ 00:25:50.105 "00000000-0000-0000-0000-000000000002" 00:25:50.105 ], 00:25:50.105 "product_name": "passthru", 00:25:50.105 "block_size": 512, 00:25:50.105 "num_blocks": 65536, 00:25:50.105 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:50.105 "assigned_rate_limits": { 00:25:50.105 "rw_ios_per_sec": 0, 00:25:50.105 "rw_mbytes_per_sec": 0, 00:25:50.105 "r_mbytes_per_sec": 0, 00:25:50.105 "w_mbytes_per_sec": 0 00:25:50.105 }, 00:25:50.105 "claimed": true, 00:25:50.105 "claim_type": "exclusive_write", 00:25:50.105 "zoned": false, 00:25:50.105 "supported_io_types": { 00:25:50.105 "read": true, 00:25:50.105 "write": true, 
00:25:50.105 "unmap": true, 00:25:50.105 "flush": true, 00:25:50.105 "reset": true, 00:25:50.105 "nvme_admin": false, 00:25:50.105 "nvme_io": false, 00:25:50.105 "nvme_io_md": false, 00:25:50.105 "write_zeroes": true, 00:25:50.105 "zcopy": true, 00:25:50.105 "get_zone_info": false, 00:25:50.105 "zone_management": false, 00:25:50.105 "zone_append": false, 00:25:50.105 "compare": false, 00:25:50.105 "compare_and_write": false, 00:25:50.105 "abort": true, 00:25:50.105 "seek_hole": false, 00:25:50.105 "seek_data": false, 00:25:50.105 "copy": true, 00:25:50.105 "nvme_iov_md": false 00:25:50.105 }, 00:25:50.105 "memory_domains": [ 00:25:50.105 { 00:25:50.105 "dma_device_id": "system", 00:25:50.105 "dma_device_type": 1 00:25:50.105 }, 00:25:50.105 { 00:25:50.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:50.105 "dma_device_type": 2 00:25:50.105 } 00:25:50.105 ], 00:25:50.105 "driver_specific": { 00:25:50.105 "passthru": { 00:25:50.105 "name": "pt2", 00:25:50.105 "base_bdev_name": "malloc2" 00:25:50.105 } 00:25:50.105 } 00:25:50.105 }' 00:25:50.105 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:50.105 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:50.105 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:50.105 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:50.105 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:50.105 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:50.105 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:50.105 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:50.105 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:50.105 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:50.105 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:50.363 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:50.363 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:50.363 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:25:50.363 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:50.363 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:50.363 "name": "pt3", 00:25:50.363 "aliases": [ 00:25:50.363 "00000000-0000-0000-0000-000000000003" 00:25:50.363 ], 00:25:50.363 "product_name": "passthru", 00:25:50.363 "block_size": 512, 00:25:50.363 "num_blocks": 65536, 00:25:50.363 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:50.363 "assigned_rate_limits": { 00:25:50.363 "rw_ios_per_sec": 0, 00:25:50.363 "rw_mbytes_per_sec": 0, 00:25:50.363 "r_mbytes_per_sec": 0, 00:25:50.363 "w_mbytes_per_sec": 0 00:25:50.363 }, 00:25:50.363 "claimed": true, 00:25:50.363 "claim_type": "exclusive_write", 00:25:50.363 "zoned": false, 00:25:50.363 "supported_io_types": { 00:25:50.363 "read": true, 00:25:50.363 "write": true, 00:25:50.363 "unmap": true, 00:25:50.363 "flush": true, 00:25:50.363 "reset": true, 00:25:50.363 "nvme_admin": false, 00:25:50.363 "nvme_io": false, 
00:25:50.363 "nvme_io_md": false, 00:25:50.363 "write_zeroes": true, 00:25:50.363 "zcopy": true, 00:25:50.363 "get_zone_info": false, 00:25:50.363 "zone_management": false, 00:25:50.363 "zone_append": false, 00:25:50.363 "compare": false, 00:25:50.363 "compare_and_write": false, 00:25:50.363 "abort": true, 00:25:50.363 "seek_hole": false, 00:25:50.363 "seek_data": false, 00:25:50.363 "copy": true, 00:25:50.363 "nvme_iov_md": false 00:25:50.363 }, 00:25:50.363 "memory_domains": [ 00:25:50.363 { 00:25:50.363 "dma_device_id": "system", 00:25:50.363 "dma_device_type": 1 00:25:50.363 }, 00:25:50.363 { 00:25:50.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:50.363 "dma_device_type": 2 00:25:50.363 } 00:25:50.363 ], 00:25:50.363 "driver_specific": { 00:25:50.363 "passthru": { 00:25:50.363 "name": "pt3", 00:25:50.363 "base_bdev_name": "malloc3" 00:25:50.363 } 00:25:50.363 } 00:25:50.363 }' 00:25:50.363 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:50.364 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:50.364 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:50.364 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:50.364 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:50.364 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:50.364 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:50.626 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:50.626 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:50.626 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:50.626 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:50.626 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:50.626 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:50.626 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:25:50.626 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:50.626 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:50.626 "name": "pt4", 00:25:50.626 "aliases": [ 00:25:50.626 "00000000-0000-0000-0000-000000000004" 00:25:50.626 ], 00:25:50.626 "product_name": "passthru", 00:25:50.626 "block_size": 512, 00:25:50.626 "num_blocks": 65536, 00:25:50.626 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:50.626 "assigned_rate_limits": { 00:25:50.626 "rw_ios_per_sec": 0, 00:25:50.626 "rw_mbytes_per_sec": 0, 00:25:50.626 "r_mbytes_per_sec": 0, 00:25:50.626 "w_mbytes_per_sec": 0 00:25:50.626 }, 00:25:50.626 "claimed": true, 00:25:50.626 "claim_type": "exclusive_write", 00:25:50.626 "zoned": false, 00:25:50.626 "supported_io_types": { 00:25:50.626 "read": true, 00:25:50.626 "write": true, 00:25:50.626 "unmap": true, 00:25:50.626 "flush": true, 00:25:50.626 "reset": true, 00:25:50.626 "nvme_admin": false, 00:25:50.626 "nvme_io": false, 00:25:50.626 "nvme_io_md": false, 00:25:50.626 "write_zeroes": true, 00:25:50.626 "zcopy": true, 00:25:50.626 "get_zone_info": false, 00:25:50.626 
"zone_management": false, 00:25:50.626 "zone_append": false, 00:25:50.626 "compare": false, 00:25:50.626 "compare_and_write": false, 00:25:50.626 "abort": true, 00:25:50.626 "seek_hole": false, 00:25:50.626 "seek_data": false, 00:25:50.626 "copy": true, 00:25:50.626 "nvme_iov_md": false 00:25:50.626 }, 00:25:50.626 "memory_domains": [ 00:25:50.626 { 00:25:50.626 "dma_device_id": "system", 00:25:50.626 "dma_device_type": 1 00:25:50.626 }, 00:25:50.626 { 00:25:50.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:50.626 "dma_device_type": 2 00:25:50.626 } 00:25:50.626 ], 00:25:50.626 "driver_specific": { 00:25:50.626 "passthru": { 00:25:50.626 "name": "pt4", 00:25:50.626 "base_bdev_name": "malloc4" 00:25:50.626 } 00:25:50.626 } 00:25:50.626 }' 00:25:50.626 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:50.626 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:50.626 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:50.626 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:50.886 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:50.886 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:50.886 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:50.886 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:50.886 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:50.886 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:50.886 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:50.886 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:50.886 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:50.886 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:25:50.886 [2024-07-15 09:52:18.975475] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:51.146 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=eb533fd8-428f-11ef-a0af-c98d8ee52a94 00:25:51.146 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z eb533fd8-428f-11ef-a0af-c98d8ee52a94 ']' 00:25:51.146 09:52:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:51.146 [2024-07-15 09:52:19.171421] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:51.146 [2024-07-15 09:52:19.171449] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:51.146 [2024-07-15 09:52:19.171470] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:51.146 [2024-07-15 09:52:19.171489] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:51.146 [2024-07-15 09:52:19.171493] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3f04d2235900 name raid_bdev1, state offline 00:25:51.146 09:52:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:51.146 09:52:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:25:51.405 09:52:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:25:51.405 09:52:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:25:51.405 09:52:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:25:51.405 09:52:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:25:51.673 09:52:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:25:51.673 09:52:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:51.933 09:52:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:25:51.933 09:52:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:25:52.192 09:52:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:25:52.192 09:52:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:25:52.452 09:52:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:25:52.452 09:52:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:52.452 09:52:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:25:52.452 09:52:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:52.452 09:52:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:25:52.452 09:52:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:52.452 09:52:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:52.452 09:52:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:52.452 09:52:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:52.452 09:52:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:52.452 09:52:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:52.452 09:52:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:52.452 09:52:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:52.452 09:52:20 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:25:52.452 09:52:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:25:52.713 [2024-07-15 09:52:20.711533] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:25:52.713 [2024-07-15 09:52:20.712246] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:25:52.713 [2024-07-15 09:52:20.712269] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:25:52.713 [2024-07-15 09:52:20.712278] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:25:52.713 [2024-07-15 09:52:20.712292] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:25:52.713 [2024-07-15 09:52:20.712334] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:25:52.713 [2024-07-15 09:52:20.712359] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:25:52.713 [2024-07-15 09:52:20.712367] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:25:52.713 [2024-07-15 09:52:20.712374] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:25:52.713 [2024-07-15 09:52:20.712379] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3f04d2235680 name raid_bdev1, state configuring
00:25:52.713 request:
00:25:52.713 {
00:25:52.713 "name": "raid_bdev1",
00:25:52.713 "raid_level": "concat",
00:25:52.713 "base_bdevs": [
00:25:52.713 "malloc1",
00:25:52.713 "malloc2",
00:25:52.713 "malloc3",
00:25:52.713 "malloc4"
00:25:52.713 ],
00:25:52.713 "strip_size_kb": 64,
00:25:52.713 "superblock": false,
00:25:52.713 "method": "bdev_raid_create",
00:25:52.713 "req_id": 1
00:25:52.713 }
00:25:52.713 Got JSON-RPC error response
00:25:52.713 response:
00:25:52.713 {
00:25:52.713 "code": -17,
00:25:52.713 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:25:52.713 }
00:25:52.713 09:52:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1
00:25:52.713 09:52:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:25:52.713 09:52:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:25:52.713 09:52:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 ))
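
Note on the exchange above: this is the suite's negative test for duplicate array creation. The four malloc bdevs still carry the superblock of the just-deleted raid_bdev1, so the bdev_raid_create RPC must be rejected with code -17 ("File exists"), and the NOT/valid_exec_arg machinery from autotest_common.sh inverts the exit status so the step passes precisely because the RPC failed. A minimal bash sketch of the same pattern, assuming a target already listening on /var/tmp/spdk-raid.sock; expect_fail is a hypothetical stand-in for the suite's NOT helper, not an SPDK script:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Hypothetical stand-in for NOT(): succeeds only when the wrapped command fails.
    expect_fail() { ! "$@"; }

    # The base bdevs still hold raid_bdev1's stale superblock, so this
    # create must fail with -17 (File exists) for the step to pass.
    expect_fail "$rpc" -s "$sock" bdev_raid_create -z 64 -r concat \
        -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
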
00:25:52.713 09:52:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:52.713 09:52:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]'
00:25:52.992 09:52:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev=
00:25:52.992 09:52:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']'
00:25:52.992 09:52:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:25:53.262 [2024-07-15 09:52:21.171517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:25:53.262 [2024-07-15 09:52:21.171581] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:53.262 [2024-07-15 09:52:21.171590] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3f04d2235180
00:25:53.262 [2024-07-15 09:52:21.171597] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:53.262 [2024-07-15 09:52:21.172345] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:53.262 [2024-07-15 09:52:21.172367] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:25:53.262 [2024-07-15 09:52:21.172387] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:25:53.262 [2024-07-15 09:52:21.172399] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:25:53.262 pt1
00:25:53.262 09:52:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4
00:25:53.262 09:52:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:25:53.262 09:52:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:25:53.262 09:52:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:25:53.262 09:52:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:25:53.262 09:52:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:25:53.262 09:52:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:25:53.262 09:52:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:25:53.262 09:52:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:25:53.262 09:52:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:25:53.262 09:52:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:53.262 09:52:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:53.523 09:52:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:53.523 "name": "raid_bdev1", 00:25:53.523 "uuid": "eb533fd8-428f-11ef-a0af-c98d8ee52a94", 00:25:53.523 "strip_size_kb": 64, 00:25:53.523 "state": "configuring", 00:25:53.523 "raid_level": "concat", 00:25:53.523 "superblock": true, 00:25:53.523 "num_base_bdevs": 4, 00:25:53.523 "num_base_bdevs_discovered": 1, 00:25:53.523 "num_base_bdevs_operational": 4, 00:25:53.523 "base_bdevs_list": [ 00:25:53.523 { 00:25:53.523 "name": "pt1", 00:25:53.523 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:53.523 "is_configured": true, 00:25:53.523 "data_offset": 2048, 00:25:53.523 "data_size": 63488 00:25:53.523 }, 00:25:53.523 { 00:25:53.523 "name": null, 00:25:53.523 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:53.523 "is_configured": false, 00:25:53.523 "data_offset": 2048, 00:25:53.523 "data_size": 63488 00:25:53.523 }, 00:25:53.523 { 00:25:53.523 "name": null, 00:25:53.523 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:53.523 "is_configured": false, 00:25:53.523 "data_offset": 2048, 00:25:53.523 "data_size": 63488 00:25:53.523 }, 00:25:53.523 { 00:25:53.523 "name": null,
00:25:53.523 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:53.523 "is_configured": false, 00:25:53.523 "data_offset": 2048, 00:25:53.523 "data_size": 63488 00:25:53.523 } 00:25:53.523 ] 00:25:53.523 }' 00:25:53.523 09:52:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:53.523 09:52:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:53.783 09:52:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:25:53.783 09:52:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:54.042 [2024-07-15 09:52:21.891557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:54.042 [2024-07-15 09:52:21.891630] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:54.042 [2024-07-15 09:52:21.891641] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3f04d2234780 00:25:54.042 [2024-07-15 09:52:21.891648] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:54.042 [2024-07-15 09:52:21.891779] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:54.042 [2024-07-15 09:52:21.891787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:54.042 [2024-07-15 09:52:21.891808] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:54.042 [2024-07-15 09:52:21.891815] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:54.042 pt2 00:25:54.042 09:52:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:54.042 [2024-07-15 09:52:22.083566] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:25:54.042 09:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:25:54.042 09:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:54.042 09:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:54.042 09:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:54.042 09:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:54.042 09:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:54.042 09:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:54.042 09:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:54.042 09:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:54.042 09:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:54.042 09:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:54.042 09:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:54.302 09:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:54.302 "name": 
"raid_bdev1", 00:25:54.302 "uuid": "eb533fd8-428f-11ef-a0af-c98d8ee52a94", 00:25:54.302 "strip_size_kb": 64, 00:25:54.302 "state": "configuring", 00:25:54.302 "raid_level": "concat", 00:25:54.302 "superblock": true, 00:25:54.302 "num_base_bdevs": 4, 00:25:54.302 "num_base_bdevs_discovered": 1, 00:25:54.302 "num_base_bdevs_operational": 4, 00:25:54.302 "base_bdevs_list": [ 00:25:54.302 { 00:25:54.302 "name": "pt1", 00:25:54.302 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:54.302 "is_configured": true, 00:25:54.302 "data_offset": 2048, 00:25:54.302 "data_size": 63488 00:25:54.302 }, 00:25:54.302 { 00:25:54.302 "name": null, 00:25:54.302 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:54.302 "is_configured": false, 00:25:54.302 "data_offset": 2048, 00:25:54.302 "data_size": 63488 00:25:54.302 }, 00:25:54.302 { 00:25:54.302 "name": null, 00:25:54.302 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:54.302 "is_configured": false, 00:25:54.302 "data_offset": 2048, 00:25:54.302 "data_size": 63488 00:25:54.302 }, 00:25:54.302 { 00:25:54.302 "name": null, 00:25:54.302 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:54.302 "is_configured": false, 00:25:54.302 "data_offset": 2048, 00:25:54.302 "data_size": 63488 00:25:54.302 } 00:25:54.302 ] 00:25:54.302 }' 00:25:54.302 09:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:54.302 09:52:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.561 09:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:25:54.561 09:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:25:54.561 09:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:54.819 [2024-07-15 09:52:22.827628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:54.819 [2024-07-15 09:52:22.827702] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:54.819 [2024-07-15 09:52:22.827713] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3f04d2234780 00:25:54.819 [2024-07-15 09:52:22.827720] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:54.819 [2024-07-15 09:52:22.827852] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:54.819 [2024-07-15 09:52:22.827860] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:54.819 [2024-07-15 09:52:22.827883] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:54.819 [2024-07-15 09:52:22.827890] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:54.819 pt2 00:25:54.819 09:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:25:54.819 09:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:25:54.819 09:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:55.078 [2024-07-15 09:52:23.039619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:55.078 [2024-07-15 09:52:23.039676] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:25:55.078 [2024-07-15 09:52:23.039686] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3f04d2235b80 00:25:55.078 [2024-07-15 09:52:23.039692] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:55.078 [2024-07-15 09:52:23.039809] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:55.078 [2024-07-15 09:52:23.039816] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:55.078 [2024-07-15 09:52:23.039835] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:25:55.078 [2024-07-15 09:52:23.039842] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:55.078 pt3 00:25:55.078 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:25:55.078 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:25:55.078 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:55.338 [2024-07-15 09:52:23.235624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:55.338 [2024-07-15 09:52:23.235673] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:55.338 [2024-07-15 09:52:23.235681] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3f04d2235900 00:25:55.338 [2024-07-15 09:52:23.235688] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:55.338 [2024-07-15 09:52:23.235774] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:55.338 [2024-07-15 09:52:23.235781] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:55.338 [2024-07-15 09:52:23.235796] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:25:55.338 [2024-07-15 09:52:23.235803] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:55.338 [2024-07-15 09:52:23.235828] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3f04d2234c80 00:25:55.338 [2024-07-15 09:52:23.235831] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:55.338 [2024-07-15 09:52:23.235849] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3f04d2297e20 00:25:55.338 [2024-07-15 09:52:23.235891] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3f04d2234c80 00:25:55.338 [2024-07-15 09:52:23.235894] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3f04d2234c80 00:25:55.338 [2024-07-15 09:52:23.235915] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:55.338 pt4 00:25:55.338 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:25:55.338 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:25:55.338 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:25:55.338 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:55.338 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:55.338 
09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:55.338 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:55.339 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:55.339 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:55.339 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:55.339 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:55.339 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:55.339 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:55.339 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:55.597 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:55.597 "name": "raid_bdev1", 00:25:55.597 "uuid": "eb533fd8-428f-11ef-a0af-c98d8ee52a94", 00:25:55.597 "strip_size_kb": 64, 00:25:55.597 "state": "online", 00:25:55.597 "raid_level": "concat", 00:25:55.597 "superblock": true, 00:25:55.598 "num_base_bdevs": 4, 00:25:55.598 "num_base_bdevs_discovered": 4, 00:25:55.598 "num_base_bdevs_operational": 4, 00:25:55.598 "base_bdevs_list": [ 00:25:55.598 { 00:25:55.598 "name": "pt1", 00:25:55.598 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:55.598 "is_configured": true, 00:25:55.598 "data_offset": 2048, 00:25:55.598 "data_size": 63488 00:25:55.598 }, 00:25:55.598 { 00:25:55.598 "name": "pt2", 00:25:55.598 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:55.598 "is_configured": true, 00:25:55.598 "data_offset": 2048, 00:25:55.598 "data_size": 63488 00:25:55.598 }, 00:25:55.598 { 00:25:55.598 "name": "pt3", 00:25:55.598 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:55.598 "is_configured": true, 00:25:55.598 "data_offset": 2048, 00:25:55.598 "data_size": 63488 00:25:55.598 }, 00:25:55.598 { 00:25:55.598 "name": "pt4", 00:25:55.598 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:55.598 "is_configured": true, 00:25:55.598 "data_offset": 2048, 00:25:55.598 "data_size": 63488 00:25:55.598 } 00:25:55.598 ] 00:25:55.598 }' 00:25:55.598 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:55.598 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:55.857 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:25:55.857 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:25:55.857 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:55.857 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:55.857 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:55.857 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:25:55.857 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:55.857 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq 
'.[]' 00:25:55.857 [2024-07-15 09:52:23.899691] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:55.857 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:55.857 "name": "raid_bdev1", 00:25:55.857 "aliases": [ 00:25:55.857 "eb533fd8-428f-11ef-a0af-c98d8ee52a94" 00:25:55.857 ], 00:25:55.857 "product_name": "Raid Volume", 00:25:55.857 "block_size": 512, 00:25:55.857 "num_blocks": 253952, 00:25:55.857 "uuid": "eb533fd8-428f-11ef-a0af-c98d8ee52a94", 00:25:55.857 "assigned_rate_limits": { 00:25:55.857 "rw_ios_per_sec": 0, 00:25:55.857 "rw_mbytes_per_sec": 0, 00:25:55.857 "r_mbytes_per_sec": 0, 00:25:55.857 "w_mbytes_per_sec": 0 00:25:55.857 }, 00:25:55.857 "claimed": false, 00:25:55.857 "zoned": false, 00:25:55.857 "supported_io_types": { 00:25:55.857 "read": true, 00:25:55.857 "write": true, 00:25:55.857 "unmap": true, 00:25:55.857 "flush": true, 00:25:55.857 "reset": true, 00:25:55.857 "nvme_admin": false, 00:25:55.857 "nvme_io": false, 00:25:55.857 "nvme_io_md": false, 00:25:55.857 "write_zeroes": true, 00:25:55.857 "zcopy": false, 00:25:55.857 "get_zone_info": false, 00:25:55.857 "zone_management": false, 00:25:55.857 "zone_append": false, 00:25:55.857 "compare": false, 00:25:55.857 "compare_and_write": false, 00:25:55.857 "abort": false, 00:25:55.857 "seek_hole": false, 00:25:55.857 "seek_data": false, 00:25:55.857 "copy": false, 00:25:55.857 "nvme_iov_md": false 00:25:55.857 }, 00:25:55.857 "memory_domains": [ 00:25:55.857 { 00:25:55.857 "dma_device_id": "system", 00:25:55.857 "dma_device_type": 1 00:25:55.857 }, 00:25:55.857 { 00:25:55.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:55.857 "dma_device_type": 2 00:25:55.857 }, 00:25:55.857 { 00:25:55.857 "dma_device_id": "system", 00:25:55.857 "dma_device_type": 1 00:25:55.857 }, 00:25:55.857 { 00:25:55.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:55.857 "dma_device_type": 2 00:25:55.857 }, 00:25:55.857 { 00:25:55.857 "dma_device_id": "system", 00:25:55.857 "dma_device_type": 1 00:25:55.857 }, 00:25:55.857 { 00:25:55.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:55.857 "dma_device_type": 2 00:25:55.857 }, 00:25:55.857 { 00:25:55.857 "dma_device_id": "system", 00:25:55.857 "dma_device_type": 1 00:25:55.857 }, 00:25:55.857 { 00:25:55.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:55.857 "dma_device_type": 2 00:25:55.857 } 00:25:55.857 ], 00:25:55.857 "driver_specific": { 00:25:55.857 "raid": { 00:25:55.857 "uuid": "eb533fd8-428f-11ef-a0af-c98d8ee52a94", 00:25:55.857 "strip_size_kb": 64, 00:25:55.857 "state": "online", 00:25:55.857 "raid_level": "concat", 00:25:55.857 "superblock": true, 00:25:55.857 "num_base_bdevs": 4, 00:25:55.857 "num_base_bdevs_discovered": 4, 00:25:55.857 "num_base_bdevs_operational": 4, 00:25:55.857 "base_bdevs_list": [ 00:25:55.857 { 00:25:55.857 "name": "pt1", 00:25:55.857 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:55.857 "is_configured": true, 00:25:55.857 "data_offset": 2048, 00:25:55.857 "data_size": 63488 00:25:55.857 }, 00:25:55.857 { 00:25:55.857 "name": "pt2", 00:25:55.857 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:55.857 "is_configured": true, 00:25:55.857 "data_offset": 2048, 00:25:55.857 "data_size": 63488 00:25:55.857 }, 00:25:55.857 { 00:25:55.857 "name": "pt3", 00:25:55.857 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:55.857 "is_configured": true, 00:25:55.857 "data_offset": 2048, 00:25:55.857 "data_size": 63488 00:25:55.857 }, 00:25:55.857 { 00:25:55.857 "name": "pt4", 
00:25:55.857 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:55.857 "is_configured": true, 00:25:55.857 "data_offset": 2048, 00:25:55.857 "data_size": 63488 00:25:55.857 } 00:25:55.857 ] 00:25:55.857 } 00:25:55.857 } 00:25:55.857 }' 00:25:55.857 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:55.857 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:25:55.857 pt2 00:25:55.857 pt3 00:25:55.857 pt4' 00:25:55.857 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:55.857 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:25:55.857 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:56.114 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:56.114 "name": "pt1", 00:25:56.114 "aliases": [ 00:25:56.114 "00000000-0000-0000-0000-000000000001" 00:25:56.114 ], 00:25:56.114 "product_name": "passthru", 00:25:56.114 "block_size": 512, 00:25:56.114 "num_blocks": 65536, 00:25:56.114 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:56.114 "assigned_rate_limits": { 00:25:56.114 "rw_ios_per_sec": 0, 00:25:56.114 "rw_mbytes_per_sec": 0, 00:25:56.114 "r_mbytes_per_sec": 0, 00:25:56.114 "w_mbytes_per_sec": 0 00:25:56.114 }, 00:25:56.114 "claimed": true, 00:25:56.114 "claim_type": "exclusive_write", 00:25:56.114 "zoned": false, 00:25:56.114 "supported_io_types": { 00:25:56.114 "read": true, 00:25:56.114 "write": true, 00:25:56.114 "unmap": true, 00:25:56.114 "flush": true, 00:25:56.114 "reset": true, 00:25:56.114 "nvme_admin": false, 00:25:56.114 "nvme_io": false, 00:25:56.114 "nvme_io_md": false, 00:25:56.114 "write_zeroes": true, 00:25:56.114 "zcopy": true, 00:25:56.114 "get_zone_info": false, 00:25:56.114 "zone_management": false, 00:25:56.114 "zone_append": false, 00:25:56.114 "compare": false, 00:25:56.114 "compare_and_write": false, 00:25:56.114 "abort": true, 00:25:56.114 "seek_hole": false, 00:25:56.114 "seek_data": false, 00:25:56.114 "copy": true, 00:25:56.114 "nvme_iov_md": false 00:25:56.114 }, 00:25:56.114 "memory_domains": [ 00:25:56.114 { 00:25:56.114 "dma_device_id": "system", 00:25:56.114 "dma_device_type": 1 00:25:56.114 }, 00:25:56.114 { 00:25:56.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:56.114 "dma_device_type": 2 00:25:56.114 } 00:25:56.114 ], 00:25:56.114 "driver_specific": { 00:25:56.114 "passthru": { 00:25:56.114 "name": "pt1", 00:25:56.114 "base_bdev_name": "malloc1" 00:25:56.114 } 00:25:56.114 } 00:25:56.114 }' 00:25:56.114 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:56.114 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:56.114 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:56.114 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:56.114 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:56.114 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:56.114 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:56.114 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:25:56.114 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:56.114 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:56.114 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:56.114 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:56.114 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:56.114 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:56.114 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:25:56.371 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:56.371 "name": "pt2", 00:25:56.371 "aliases": [ 00:25:56.371 "00000000-0000-0000-0000-000000000002" 00:25:56.371 ], 00:25:56.371 "product_name": "passthru", 00:25:56.371 "block_size": 512, 00:25:56.371 "num_blocks": 65536, 00:25:56.371 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:56.371 "assigned_rate_limits": { 00:25:56.371 "rw_ios_per_sec": 0, 00:25:56.371 "rw_mbytes_per_sec": 0, 00:25:56.371 "r_mbytes_per_sec": 0, 00:25:56.371 "w_mbytes_per_sec": 0 00:25:56.371 }, 00:25:56.371 "claimed": true, 00:25:56.371 "claim_type": "exclusive_write", 00:25:56.371 "zoned": false, 00:25:56.371 "supported_io_types": { 00:25:56.371 "read": true, 00:25:56.371 "write": true, 00:25:56.371 "unmap": true, 00:25:56.371 "flush": true, 00:25:56.371 "reset": true, 00:25:56.371 "nvme_admin": false, 00:25:56.371 "nvme_io": false, 00:25:56.371 "nvme_io_md": false, 00:25:56.371 "write_zeroes": true, 00:25:56.371 "zcopy": true, 00:25:56.371 "get_zone_info": false, 00:25:56.371 "zone_management": false, 00:25:56.371 "zone_append": false, 00:25:56.371 "compare": false, 00:25:56.371 "compare_and_write": false, 00:25:56.371 "abort": true, 00:25:56.371 "seek_hole": false, 00:25:56.371 "seek_data": false, 00:25:56.371 "copy": true, 00:25:56.371 "nvme_iov_md": false 00:25:56.371 }, 00:25:56.371 "memory_domains": [ 00:25:56.371 { 00:25:56.371 "dma_device_id": "system", 00:25:56.371 "dma_device_type": 1 00:25:56.371 }, 00:25:56.371 { 00:25:56.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:56.371 "dma_device_type": 2 00:25:56.371 } 00:25:56.371 ], 00:25:56.371 "driver_specific": { 00:25:56.371 "passthru": { 00:25:56.371 "name": "pt2", 00:25:56.371 "base_bdev_name": "malloc2" 00:25:56.371 } 00:25:56.371 } 00:25:56.371 }' 00:25:56.371 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:56.371 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:56.371 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:56.371 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:56.371 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:56.371 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:56.371 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:56.371 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:56.371 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:56.371 09:52:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:56.629 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:56.629 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:56.629 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:56.629 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:25:56.629 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:56.629 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:56.629 "name": "pt3", 00:25:56.629 "aliases": [ 00:25:56.629 "00000000-0000-0000-0000-000000000003" 00:25:56.629 ], 00:25:56.629 "product_name": "passthru", 00:25:56.629 "block_size": 512, 00:25:56.629 "num_blocks": 65536, 00:25:56.629 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:56.629 "assigned_rate_limits": { 00:25:56.629 "rw_ios_per_sec": 0, 00:25:56.629 "rw_mbytes_per_sec": 0, 00:25:56.629 "r_mbytes_per_sec": 0, 00:25:56.629 "w_mbytes_per_sec": 0 00:25:56.629 }, 00:25:56.629 "claimed": true, 00:25:56.629 "claim_type": "exclusive_write", 00:25:56.629 "zoned": false, 00:25:56.629 "supported_io_types": { 00:25:56.629 "read": true, 00:25:56.629 "write": true, 00:25:56.629 "unmap": true, 00:25:56.629 "flush": true, 00:25:56.629 "reset": true, 00:25:56.629 "nvme_admin": false, 00:25:56.629 "nvme_io": false, 00:25:56.629 "nvme_io_md": false, 00:25:56.629 "write_zeroes": true, 00:25:56.629 "zcopy": true, 00:25:56.629 "get_zone_info": false, 00:25:56.629 "zone_management": false, 00:25:56.629 "zone_append": false, 00:25:56.629 "compare": false, 00:25:56.629 "compare_and_write": false, 00:25:56.629 "abort": true, 00:25:56.629 "seek_hole": false, 00:25:56.629 "seek_data": false, 00:25:56.629 "copy": true, 00:25:56.629 "nvme_iov_md": false 00:25:56.629 }, 00:25:56.629 "memory_domains": [ 00:25:56.629 { 00:25:56.629 "dma_device_id": "system", 00:25:56.629 "dma_device_type": 1 00:25:56.629 }, 00:25:56.629 { 00:25:56.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:56.629 "dma_device_type": 2 00:25:56.629 } 00:25:56.629 ], 00:25:56.629 "driver_specific": { 00:25:56.629 "passthru": { 00:25:56.629 "name": "pt3", 00:25:56.629 "base_bdev_name": "malloc3" 00:25:56.629 } 00:25:56.629 } 00:25:56.629 }' 00:25:56.629 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:56.629 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:56.629 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:56.629 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:56.629 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:56.629 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:56.629 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:56.629 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:56.886 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:56.886 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:56.886 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- 
# jq .dif_type 00:25:56.886 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:56.886 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:56.886 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:25:56.886 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:56.886 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:56.886 "name": "pt4", 00:25:56.886 "aliases": [ 00:25:56.886 "00000000-0000-0000-0000-000000000004" 00:25:56.886 ], 00:25:56.886 "product_name": "passthru", 00:25:56.886 "block_size": 512, 00:25:56.886 "num_blocks": 65536, 00:25:56.886 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:56.886 "assigned_rate_limits": { 00:25:56.886 "rw_ios_per_sec": 0, 00:25:56.886 "rw_mbytes_per_sec": 0, 00:25:56.886 "r_mbytes_per_sec": 0, 00:25:56.886 "w_mbytes_per_sec": 0 00:25:56.886 }, 00:25:56.886 "claimed": true, 00:25:56.886 "claim_type": "exclusive_write", 00:25:56.886 "zoned": false, 00:25:56.886 "supported_io_types": { 00:25:56.886 "read": true, 00:25:56.886 "write": true, 00:25:56.886 "unmap": true, 00:25:56.886 "flush": true, 00:25:56.886 "reset": true, 00:25:56.886 "nvme_admin": false, 00:25:56.886 "nvme_io": false, 00:25:56.886 "nvme_io_md": false, 00:25:56.886 "write_zeroes": true, 00:25:56.886 "zcopy": true, 00:25:56.886 "get_zone_info": false, 00:25:56.886 "zone_management": false, 00:25:56.886 "zone_append": false, 00:25:56.886 "compare": false, 00:25:56.886 "compare_and_write": false, 00:25:56.886 "abort": true, 00:25:56.886 "seek_hole": false, 00:25:56.886 "seek_data": false, 00:25:56.886 "copy": true, 00:25:56.886 "nvme_iov_md": false 00:25:56.886 }, 00:25:56.886 "memory_domains": [ 00:25:56.886 { 00:25:56.886 "dma_device_id": "system", 00:25:56.886 "dma_device_type": 1 00:25:56.886 }, 00:25:56.886 { 00:25:56.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:56.886 "dma_device_type": 2 00:25:56.886 } 00:25:56.886 ], 00:25:56.886 "driver_specific": { 00:25:56.886 "passthru": { 00:25:56.886 "name": "pt4", 00:25:56.886 "base_bdev_name": "malloc4" 00:25:56.886 } 00:25:56.886 } 00:25:56.886 }' 00:25:56.886 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:56.886 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:56.886 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:56.886 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:57.153 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:57.153 09:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:57.153 09:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:57.153 09:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:57.153 09:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:57.153 09:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:57.153 09:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:57.153 09:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:57.153 09:52:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:57.153 09:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:25:57.153 [2024-07-15 09:52:25.231745] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:57.153 09:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' eb533fd8-428f-11ef-a0af-c98d8ee52a94 '!=' eb533fd8-428f-11ef-a0af-c98d8ee52a94 ']' 00:25:57.153 09:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:25:57.153 09:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:57.153 09:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:25:57.153 09:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 62112 00:25:57.153 09:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 62112 ']' 00:25:57.153 09:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 62112 00:25:57.153 09:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:25:57.153 09:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:25:57.410 09:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:25:57.410 09:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 62112 00:25:57.410 09:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:25:57.410 09:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:25:57.410 killing process with pid 62112 00:25:57.410 09:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62112' 00:25:57.410 09:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 62112 00:25:57.410 [2024-07-15 09:52:25.263859] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:57.410 [2024-07-15 09:52:25.263891] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:57.410 [2024-07-15 09:52:25.263910] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:57.410 [2024-07-15 09:52:25.263914] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3f04d2234c80 name raid_bdev1, state offline 00:25:57.410 09:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 62112 00:25:57.410 [2024-07-15 09:52:25.298318] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:57.668 09:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:25:57.668 00:25:57.668 real 0m11.722s 00:25:57.668 user 0m19.948s 00:25:57.668 sys 0m2.688s 00:25:57.668 09:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:57.668 09:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.668 ************************************ 00:25:57.668 END TEST raid_superblock_test 00:25:57.668 ************************************ 00:25:57.668 09:52:25 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:25:57.668 09:52:25 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 4 
read 00:25:57.668 09:52:25 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:25:57.668 09:52:25 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:57.668 09:52:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:57.668 ************************************ 00:25:57.668 START TEST raid_read_error_test 00:25:57.668 ************************************ 00:25:57.668 09:52:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 4 read 00:25:57.668 09:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:25:57.668 09:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:25:57.668 09:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:25:57.668 09:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:25:57.668 09:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:57.668 09:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:25:57.668 09:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:25:57.668 09:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:57.668 09:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:25:57.668 09:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:25:57.668 09:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:57.668 09:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:25:57.668 09:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:25:57.668 09:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:57.668 09:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:25:57.668 09:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:25:57.668 09:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:57.668 09:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:57.668 09:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:25:57.668 09:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:25:57.668 09:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:25:57.668 09:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:25:57.668 09:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:25:57.668 09:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:25:57.668 09:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:25:57.668 09:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:25:57.668 09:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:25:57.668 09:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:25:57.668 09:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.ujktgU8Yyl 
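
For orientation before the launch that follows: raid_io_error_test drives I/O with bdevperf against raid_bdev1, logging to the temp file created above, and only afterwards builds the RAID over error-injectable base bdevs. A condensed sketch of the startup sequence, with the binary path, socket, and flags copied from the run recorded below (waitforlisten is the autotest_common.sh helper that polls until the RPC socket is up):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    sock=/var/tmp/spdk-raid.sock
    log=/raidtest/tmp.ujktgU8Yyl   # per-run temp file from mktemp -p /raidtest

    # 60 s of 50/50 random read/write in 128k I/Os at queue depth 1, targeting
    # raid_bdev1; -z holds the run until triggered over RPC, -L bdev_raid
    # enables the debug log flag seen throughout this transcript.
    "$bdevperf" -r "$sock" -T raid_bdev1 -t 60 -w randrw -M 50 \
        -o 128k -q 1 -z -f -L bdev_raid &> "$log" &
    raid_pid=$!
    waitforlisten "$raid_pid" "$sock"
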
00:25:57.668 09:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=62505 00:25:57.668 09:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 62505 /var/tmp/spdk-raid.sock 00:25:57.668 09:52:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:25:57.668 09:52:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 62505 ']' 00:25:57.668 09:52:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:57.668 09:52:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:57.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:57.668 09:52:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:57.668 09:52:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:57.668 09:52:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.668 [2024-07-15 09:52:25.635907] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:25:57.668 [2024-07-15 09:52:25.636232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:25:58.603 EAL: TSC is not safe to use in SMP mode 00:25:58.603 EAL: TSC is not invariant 00:25:58.603 [2024-07-15 09:52:26.349788] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.603 [2024-07-15 09:52:26.463595] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
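At this point bdevperf has been started in standby mode against the raid RPC socket, and the EAL TSC notices are the expected startup noise on this single-core FreeBSD guest. A minimal sketch of the launch, assuming the backgrounding and the redirect to the mktemp log (the trace only shows that file being grepped later) and waitforlisten from autotest_common.sh:

    # Minimal sketch of the bdevperf launch traced above. -z keeps the app idle
    # until a perform_tests RPC arrives; arguments are as shown in the trace.
    rpc_sock=/var/tmp/spdk-raid.sock
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r "$rpc_sock" -T raid_bdev1 -t 60 -w randrw -M 50 \
        -o 128k -q 1 -z -f -L bdev_raid > "$bdevperf_log" &
    raid_pid=$!
    waitforlisten "$raid_pid" "$rpc_sock"   # autotest_common.sh helper: poll until the socket is up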
00:25:58.603 [2024-07-15 09:52:26.466046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.603 [2024-07-15 09:52:26.466748] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:58.603 [2024-07-15 09:52:26.466760] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:58.603 09:52:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:58.603 09:52:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:25:58.603 09:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:25:58.603 09:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:58.862 BaseBdev1_malloc 00:25:58.862 09:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:25:59.120 true 00:25:59.120 09:52:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:25:59.120 [2024-07-15 09:52:27.189668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:25:59.120 [2024-07-15 09:52:27.189734] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:59.120 [2024-07-15 09:52:27.189765] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1482b1e34780 00:25:59.120 [2024-07-15 09:52:27.189773] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:59.120 [2024-07-15 09:52:27.190454] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:59.120 [2024-07-15 09:52:27.190482] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:59.120 BaseBdev1 00:25:59.120 09:52:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:25:59.120 09:52:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:59.379 BaseBdev2_malloc 00:25:59.379 09:52:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:25:59.639 true 00:25:59.639 09:52:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:25:59.898 [2024-07-15 09:52:27.765711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:25:59.898 [2024-07-15 09:52:27.765781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:59.898 [2024-07-15 09:52:27.765825] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1482b1e34c80 00:25:59.898 [2024-07-15 09:52:27.765833] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:59.898 [2024-07-15 09:52:27.766615] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:59.898 [2024-07-15 09:52:27.766643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: BaseBdev2 00:25:59.898 BaseBdev2 00:25:59.898 09:52:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:25:59.898 09:52:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:59.898 BaseBdev3_malloc 00:25:59.898 09:52:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:26:00.158 true 00:26:00.158 09:52:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:26:00.418 [2024-07-15 09:52:28.317714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:26:00.418 [2024-07-15 09:52:28.317774] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:00.418 [2024-07-15 09:52:28.317804] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1482b1e35180 00:26:00.418 [2024-07-15 09:52:28.317811] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:00.418 [2024-07-15 09:52:28.318468] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:00.418 [2024-07-15 09:52:28.318494] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:00.418 BaseBdev3 00:26:00.418 09:52:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:00.418 09:52:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:26:00.675 BaseBdev4_malloc 00:26:00.675 09:52:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:26:00.675 true 00:26:00.675 09:52:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:26:00.933 [2024-07-15 09:52:28.929755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:26:00.933 [2024-07-15 09:52:28.929822] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:00.933 [2024-07-15 09:52:28.929854] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1482b1e35680 00:26:00.933 [2024-07-15 09:52:28.929861] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:00.933 [2024-07-15 09:52:28.930605] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:00.933 [2024-07-15 09:52:28.930631] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:00.933 BaseBdev4 00:26:00.933 09:52:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:26:01.192 [2024-07-15 09:52:29.145783] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:01.192 [2024-07-15 09:52:29.146475] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:01.192 [2024-07-15 09:52:29.146503] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:01.192 [2024-07-15 09:52:29.146517] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:01.192 [2024-07-15 09:52:29.146579] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1482b1e35900 00:26:01.193 [2024-07-15 09:52:29.146584] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:26:01.193 [2024-07-15 09:52:29.146628] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1482b1ea0e20 00:26:01.193 [2024-07-15 09:52:29.146705] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1482b1e35900 00:26:01.193 [2024-07-15 09:52:29.146708] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1482b1e35900 00:26:01.193 [2024-07-15 09:52:29.146732] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:01.193 09:52:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:26:01.193 09:52:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:01.193 09:52:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:01.193 09:52:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:01.193 09:52:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:01.193 09:52:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:01.193 09:52:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:01.193 09:52:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:01.193 09:52:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:01.193 09:52:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:01.193 09:52:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:01.193 09:52:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:01.452 09:52:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:01.452 "name": "raid_bdev1", 00:26:01.452 "uuid": "f2b4aabc-428f-11ef-a0af-c98d8ee52a94", 00:26:01.452 "strip_size_kb": 64, 00:26:01.452 "state": "online", 00:26:01.452 "raid_level": "concat", 00:26:01.452 "superblock": true, 00:26:01.452 "num_base_bdevs": 4, 00:26:01.452 "num_base_bdevs_discovered": 4, 00:26:01.452 "num_base_bdevs_operational": 4, 00:26:01.452 "base_bdevs_list": [ 00:26:01.452 { 00:26:01.452 "name": "BaseBdev1", 00:26:01.452 "uuid": "539e90b6-6346-b657-b1b8-00f84f59ec54", 00:26:01.452 "is_configured": true, 00:26:01.452 "data_offset": 2048, 00:26:01.452 "data_size": 63488 00:26:01.452 }, 00:26:01.452 { 00:26:01.452 "name": "BaseBdev2", 00:26:01.452 "uuid": "71cfcfd5-6cba-1a50-8baa-9722e8f94389", 00:26:01.452 "is_configured": true, 00:26:01.452 "data_offset": 2048, 00:26:01.452 "data_size": 63488 00:26:01.452 }, 00:26:01.452 { 00:26:01.452 "name": "BaseBdev3", 00:26:01.452 "uuid": 
"f3ce8b26-f9bb-b250-8f6e-6b40e55950c8", 00:26:01.452 "is_configured": true, 00:26:01.452 "data_offset": 2048, 00:26:01.452 "data_size": 63488 00:26:01.452 }, 00:26:01.452 { 00:26:01.452 "name": "BaseBdev4", 00:26:01.452 "uuid": "008a4ced-efd3-e85d-9efb-4ba1f2f53ae0", 00:26:01.452 "is_configured": true, 00:26:01.452 "data_offset": 2048, 00:26:01.452 "data_size": 63488 00:26:01.452 } 00:26:01.452 ] 00:26:01.452 }' 00:26:01.452 09:52:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:01.452 09:52:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.712 09:52:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:26:01.712 09:52:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:26:01.712 [2024-07-15 09:52:29.753927] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1482b1ea0ec0 00:26:02.658 09:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:26:02.917 09:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:26:02.917 09:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:26:02.917 09:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:26:02.917 09:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:26:02.917 09:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:02.917 09:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:02.917 09:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:02.917 09:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:02.917 09:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:02.917 09:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:02.917 09:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:02.917 09:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:02.917 09:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:02.917 09:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:02.917 09:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:03.176 09:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:03.176 "name": "raid_bdev1", 00:26:03.176 "uuid": "f2b4aabc-428f-11ef-a0af-c98d8ee52a94", 00:26:03.176 "strip_size_kb": 64, 00:26:03.176 "state": "online", 00:26:03.176 "raid_level": "concat", 00:26:03.176 "superblock": true, 00:26:03.176 "num_base_bdevs": 4, 00:26:03.176 "num_base_bdevs_discovered": 4, 00:26:03.176 "num_base_bdevs_operational": 4, 00:26:03.176 "base_bdevs_list": [ 00:26:03.176 { 00:26:03.176 "name": "BaseBdev1", 00:26:03.176 "uuid": 
"539e90b6-6346-b657-b1b8-00f84f59ec54", 00:26:03.176 "is_configured": true, 00:26:03.176 "data_offset": 2048, 00:26:03.176 "data_size": 63488 00:26:03.176 }, 00:26:03.176 { 00:26:03.176 "name": "BaseBdev2", 00:26:03.176 "uuid": "71cfcfd5-6cba-1a50-8baa-9722e8f94389", 00:26:03.176 "is_configured": true, 00:26:03.176 "data_offset": 2048, 00:26:03.176 "data_size": 63488 00:26:03.176 }, 00:26:03.176 { 00:26:03.176 "name": "BaseBdev3", 00:26:03.176 "uuid": "f3ce8b26-f9bb-b250-8f6e-6b40e55950c8", 00:26:03.176 "is_configured": true, 00:26:03.176 "data_offset": 2048, 00:26:03.176 "data_size": 63488 00:26:03.176 }, 00:26:03.176 { 00:26:03.176 "name": "BaseBdev4", 00:26:03.176 "uuid": "008a4ced-efd3-e85d-9efb-4ba1f2f53ae0", 00:26:03.176 "is_configured": true, 00:26:03.176 "data_offset": 2048, 00:26:03.176 "data_size": 63488 00:26:03.176 } 00:26:03.176 ] 00:26:03.176 }' 00:26:03.176 09:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:03.176 09:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:03.433 09:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:03.692 [2024-07-15 09:52:31.632817] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:03.692 [2024-07-15 09:52:31.632851] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:03.692 [2024-07-15 09:52:31.633186] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:03.692 [2024-07-15 09:52:31.633205] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:03.692 [2024-07-15 09:52:31.633214] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:03.692 [2024-07-15 09:52:31.633219] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1482b1e35900 name raid_bdev1, state offline 00:26:03.692 0 00:26:03.692 09:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 62505 00:26:03.692 09:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 62505 ']' 00:26:03.692 09:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 62505 00:26:03.692 09:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:26:03.692 09:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:26:03.692 09:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 62505 00:26:03.692 09:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:26:03.692 09:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:26:03.693 09:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:26:03.693 killing process with pid 62505 00:26:03.693 09:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62505' 00:26:03.693 09:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 62505 00:26:03.693 [2024-07-15 09:52:31.663129] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:03.693 09:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 62505 00:26:03.693 [2024-07-15 09:52:31.696614] 
bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:03.952 09:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.ujktgU8Yyl 00:26:03.952 09:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:26:03.952 09:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:26:03.952 09:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.53 00:26:03.952 09:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:26:03.952 09:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:03.952 09:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:26:03.952 09:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.53 != \0\.\0\0 ]] 00:26:03.952 00:26:03.952 real 0m6.344s 00:26:03.952 user 0m9.471s 00:26:03.952 sys 0m1.390s 00:26:03.952 09:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:03.952 09:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:03.952 ************************************ 00:26:03.952 END TEST raid_read_error_test 00:26:03.952 ************************************ 00:26:03.952 09:52:31 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:26:03.952 09:52:31 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:26:03.952 09:52:31 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:26:03.952 09:52:31 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:03.952 09:52:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:03.952 ************************************ 00:26:03.952 START TEST raid_write_error_test 00:26:03.952 ************************************ 00:26:03.952 09:52:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 4 write 00:26:03.952 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:26:03.952 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:26:03.952 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:26:03.952 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:26:03.952 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:03.952 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:26:03.952 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:03.952 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:03.952 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:26:03.952 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:03.952 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:03.952 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:26:03.953 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:03.953 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:03.953 09:52:32 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:26:03.953 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:03.953 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:03.953 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:03.953 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:26:03.953 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:26:03.953 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:26:03.953 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:26:03.953 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:26:03.953 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:26:03.953 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:26:03.953 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:26:03.953 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:26:03.953 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:26:03.953 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.w6e5n9vnEr 00:26:03.953 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=62639 00:26:03.953 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 62639 /var/tmp/spdk-raid.sock 00:26:03.953 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:26:03.953 09:52:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 62639 ']' 00:26:03.953 09:52:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:03.953 09:52:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:03.953 09:52:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:03.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:03.953 09:52:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:03.953 09:52:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:03.953 [2024-07-15 09:52:32.033290] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:26:03.953 [2024-07-15 09:52:32.033576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:26:04.888 EAL: TSC is not safe to use in SMP mode 00:26:04.888 EAL: TSC is not invariant 00:26:04.888 [2024-07-15 09:52:32.744812] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.888 [2024-07-15 09:52:32.859341] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
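The write-error test rebuilds the same three-layer stack per base bdev that the read test used: a malloc bdev, an error-injection bdev on top of it (bdev_error_create exposes it as EE_<name>), and a passthru bdev that presents the final BaseBdevN name, followed by the raid create. A sketch of that sequence, using the RPC calls exactly as they appear in the trace:

    # Sketch of the per-base-bdev stack built in the following trace lines.
    # The EE_* error layer is what bdev_error_inject_error targets later.
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    for bdev in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
        rpc bdev_malloc_create 32 512 -b "${bdev}_malloc"          # 32 MB, 512 B blocks
        rpc bdev_error_create "${bdev}_malloc"                     # creates EE_${bdev}_malloc
        rpc bdev_passthru_create -b "EE_${bdev}_malloc" -p "$bdev"
    done
    rpc bdev_raid_create -z 64 -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s   # -s: with superblock, per the "superblock": true in the dumps above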
00:26:04.888 [2024-07-15 09:52:32.861747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:04.888 [2024-07-15 09:52:32.862429] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:04.888 [2024-07-15 09:52:32.862440] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:04.888 09:52:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:04.888 09:52:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:26:04.888 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:04.888 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:05.152 BaseBdev1_malloc 00:26:05.152 09:52:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:26:05.418 true 00:26:05.418 09:52:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:26:05.418 [2024-07-15 09:52:33.497304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:26:05.418 [2024-07-15 09:52:33.497377] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:05.418 [2024-07-15 09:52:33.497409] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11e9ca234780 00:26:05.418 [2024-07-15 09:52:33.497416] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:05.418 [2024-07-15 09:52:33.498141] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:05.418 [2024-07-15 09:52:33.498171] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:05.418 BaseBdev1 00:26:05.418 09:52:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:05.418 09:52:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:05.676 BaseBdev2_malloc 00:26:05.676 09:52:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:26:05.934 true 00:26:05.934 09:52:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:26:06.192 [2024-07-15 09:52:34.085348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:26:06.192 [2024-07-15 09:52:34.085415] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:06.192 [2024-07-15 09:52:34.085447] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11e9ca234c80 00:26:06.192 [2024-07-15 09:52:34.085455] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:06.192 [2024-07-15 09:52:34.086209] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:06.192 [2024-07-15 09:52:34.086237] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev2 00:26:06.192 BaseBdev2 00:26:06.192 09:52:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:06.192 09:52:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:06.192 BaseBdev3_malloc 00:26:06.452 09:52:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:26:06.452 true 00:26:06.452 09:52:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:26:06.711 [2024-07-15 09:52:34.685374] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:26:06.711 [2024-07-15 09:52:34.685440] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:06.711 [2024-07-15 09:52:34.685471] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11e9ca235180 00:26:06.711 [2024-07-15 09:52:34.685478] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:06.711 [2024-07-15 09:52:34.686168] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:06.711 [2024-07-15 09:52:34.686196] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:06.711 BaseBdev3 00:26:06.711 09:52:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:06.711 09:52:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:26:06.970 BaseBdev4_malloc 00:26:06.970 09:52:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:26:07.228 true 00:26:07.228 09:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:26:07.228 [2024-07-15 09:52:35.293400] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:26:07.228 [2024-07-15 09:52:35.293466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:07.228 [2024-07-15 09:52:35.293494] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11e9ca235680 00:26:07.228 [2024-07-15 09:52:35.293501] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:07.228 [2024-07-15 09:52:35.294149] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:07.228 [2024-07-15 09:52:35.294177] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:07.228 BaseBdev4 00:26:07.228 09:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:26:07.488 [2024-07-15 09:52:35.489420] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:07.488 [2024-07-15 09:52:35.490044] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:07.488 [2024-07-15 09:52:35.490070] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:07.488 [2024-07-15 09:52:35.490083] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:07.488 [2024-07-15 09:52:35.490144] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x11e9ca235900 00:26:07.488 [2024-07-15 09:52:35.490149] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:26:07.488 [2024-07-15 09:52:35.490191] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x11e9ca2a0e20 00:26:07.488 [2024-07-15 09:52:35.490262] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x11e9ca235900 00:26:07.488 [2024-07-15 09:52:35.490265] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x11e9ca235900 00:26:07.488 [2024-07-15 09:52:35.490284] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:07.488 09:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:26:07.488 09:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:07.488 09:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:07.488 09:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:07.488 09:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:07.488 09:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:07.488 09:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:07.488 09:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:07.488 09:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:07.488 09:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:07.488 09:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:07.488 09:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:07.748 09:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:07.748 "name": "raid_bdev1", 00:26:07.748 "uuid": "f67ca12b-428f-11ef-a0af-c98d8ee52a94", 00:26:07.748 "strip_size_kb": 64, 00:26:07.748 "state": "online", 00:26:07.748 "raid_level": "concat", 00:26:07.749 "superblock": true, 00:26:07.749 "num_base_bdevs": 4, 00:26:07.749 "num_base_bdevs_discovered": 4, 00:26:07.749 "num_base_bdevs_operational": 4, 00:26:07.749 "base_bdevs_list": [ 00:26:07.749 { 00:26:07.749 "name": "BaseBdev1", 00:26:07.749 "uuid": "e53c5d76-a885-3158-adbb-fe1abeda1fa8", 00:26:07.749 "is_configured": true, 00:26:07.749 "data_offset": 2048, 00:26:07.749 "data_size": 63488 00:26:07.749 }, 00:26:07.749 { 00:26:07.749 "name": "BaseBdev2", 00:26:07.749 "uuid": "71c80e3d-2e03-7d5d-98cb-3c798db00f9c", 00:26:07.749 "is_configured": true, 00:26:07.749 "data_offset": 2048, 00:26:07.749 "data_size": 63488 00:26:07.749 }, 00:26:07.749 { 00:26:07.749 "name": "BaseBdev3", 00:26:07.749 "uuid": 
"3b3bdc96-f68d-555f-8e63-b9bfde1b4957", 00:26:07.749 "is_configured": true, 00:26:07.749 "data_offset": 2048, 00:26:07.749 "data_size": 63488 00:26:07.749 }, 00:26:07.749 { 00:26:07.749 "name": "BaseBdev4", 00:26:07.749 "uuid": "b902195a-d247-315f-ad25-36b2279a0701", 00:26:07.749 "is_configured": true, 00:26:07.749 "data_offset": 2048, 00:26:07.749 "data_size": 63488 00:26:07.749 } 00:26:07.749 ] 00:26:07.749 }' 00:26:07.749 09:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:07.749 09:52:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:08.012 09:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:26:08.012 09:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:26:08.012 [2024-07-15 09:52:36.073521] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x11e9ca2a0ec0 00:26:08.948 09:52:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:26:09.206 09:52:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:26:09.206 09:52:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:26:09.206 09:52:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:26:09.206 09:52:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:26:09.206 09:52:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:09.206 09:52:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:09.206 09:52:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:09.206 09:52:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:09.206 09:52:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:09.206 09:52:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:09.206 09:52:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:09.206 09:52:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:09.206 09:52:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:09.206 09:52:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:09.206 09:52:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:09.465 09:52:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:09.465 "name": "raid_bdev1", 00:26:09.465 "uuid": "f67ca12b-428f-11ef-a0af-c98d8ee52a94", 00:26:09.465 "strip_size_kb": 64, 00:26:09.465 "state": "online", 00:26:09.465 "raid_level": "concat", 00:26:09.465 "superblock": true, 00:26:09.465 "num_base_bdevs": 4, 00:26:09.465 "num_base_bdevs_discovered": 4, 00:26:09.465 "num_base_bdevs_operational": 4, 00:26:09.465 "base_bdevs_list": [ 00:26:09.465 { 00:26:09.465 "name": "BaseBdev1", 00:26:09.465 "uuid": 
"e53c5d76-a885-3158-adbb-fe1abeda1fa8", 00:26:09.465 "is_configured": true, 00:26:09.465 "data_offset": 2048, 00:26:09.465 "data_size": 63488 00:26:09.465 }, 00:26:09.465 { 00:26:09.465 "name": "BaseBdev2", 00:26:09.465 "uuid": "71c80e3d-2e03-7d5d-98cb-3c798db00f9c", 00:26:09.465 "is_configured": true, 00:26:09.465 "data_offset": 2048, 00:26:09.465 "data_size": 63488 00:26:09.465 }, 00:26:09.465 { 00:26:09.465 "name": "BaseBdev3", 00:26:09.465 "uuid": "3b3bdc96-f68d-555f-8e63-b9bfde1b4957", 00:26:09.465 "is_configured": true, 00:26:09.465 "data_offset": 2048, 00:26:09.465 "data_size": 63488 00:26:09.465 }, 00:26:09.465 { 00:26:09.465 "name": "BaseBdev4", 00:26:09.465 "uuid": "b902195a-d247-315f-ad25-36b2279a0701", 00:26:09.465 "is_configured": true, 00:26:09.465 "data_offset": 2048, 00:26:09.465 "data_size": 63488 00:26:09.465 } 00:26:09.465 ] 00:26:09.465 }' 00:26:09.465 09:52:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:09.465 09:52:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.724 09:52:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:09.982 [2024-07-15 09:52:38.008706] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:09.982 [2024-07-15 09:52:38.008740] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:09.982 [2024-07-15 09:52:38.009051] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:09.982 [2024-07-15 09:52:38.009060] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:09.982 [2024-07-15 09:52:38.009069] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:09.982 [2024-07-15 09:52:38.009074] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x11e9ca235900 name raid_bdev1, state offline 00:26:09.982 0 00:26:09.982 09:52:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 62639 00:26:09.982 09:52:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 62639 ']' 00:26:09.982 09:52:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 62639 00:26:09.982 09:52:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:26:09.982 09:52:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:26:09.983 09:52:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 62639 00:26:09.983 09:52:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:26:09.983 09:52:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:26:09.983 09:52:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:26:09.983 09:52:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62639' 00:26:09.983 killing process with pid 62639 00:26:09.983 09:52:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 62639 00:26:09.983 [2024-07-15 09:52:38.043589] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:09.983 09:52:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 62639 00:26:09.983 [2024-07-15 
09:52:38.077868] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:10.242 09:52:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.w6e5n9vnEr 00:26:10.242 09:52:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:26:10.242 09:52:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:26:10.242 09:52:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:26:10.242 09:52:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:26:10.242 09:52:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:10.242 09:52:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:26:10.242 09:52:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:26:10.242 00:26:10.242 real 0m6.328s 00:26:10.242 user 0m9.382s 00:26:10.242 sys 0m1.434s 00:26:10.242 09:52:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:10.242 09:52:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.242 ************************************ 00:26:10.242 END TEST raid_write_error_test 00:26:10.242 ************************************ 00:26:10.500 09:52:38 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:26:10.500 09:52:38 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:26:10.501 09:52:38 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:26:10.501 09:52:38 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:26:10.501 09:52:38 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:10.501 09:52:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:10.501 ************************************ 00:26:10.501 START TEST raid_state_function_test 00:26:10.501 ************************************ 00:26:10.501 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 4 false 00:26:10.501 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:26:10.501 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:26:10.501 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:26:10.501 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:26:10.501 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:26:10.501 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:10.501 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:26:10.501 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:10.501 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:10.501 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:26:10.501 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:10.501 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:10.501 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 
00:26:10.501 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:10.501 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:10.501 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:26:10.501 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:10.501 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:10.501 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:10.501 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:26:10.501 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:26:10.501 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:26:10.501 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:26:10.501 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:26:10.501 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:26:10.501 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:26:10.501 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:26:10.501 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:26:10.501 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=62771 00:26:10.501 Process raid pid: 62771 00:26:10.501 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 62771' 00:26:10.501 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 62771 /var/tmp/spdk-raid.sock 00:26:10.501 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:26:10.501 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 62771 ']' 00:26:10.501 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:10.501 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:10.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:10.501 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:10.501 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:10.501 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.501 [2024-07-15 09:52:38.415646] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
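raid_state_function_test exercises RAID state transitions through RPC alone, so it launches the bare bdev_svc app instead of bdevperf. A sketch of that launch, assuming the usual backgrounding, with the arguments shown in the trace:

    # Sketch of the app launch for the state-function test: bdev_svc hosts the
    # bdev layer with no I/O generator; every state change comes in via rpc.py.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    echo "Process raid pid: $raid_pid"
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock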
00:26:10.501 [2024-07-15 09:52:38.415978] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:26:11.066 EAL: TSC is not safe to use in SMP mode 00:26:11.066 EAL: TSC is not invariant 00:26:11.066 [2024-07-15 09:52:39.122101] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.325 [2024-07-15 09:52:39.237722] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:26:11.325 [2024-07-15 09:52:39.240239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.325 [2024-07-15 09:52:39.240974] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:11.325 [2024-07-15 09:52:39.240984] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:11.325 09:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:11.325 09:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:26:11.325 09:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:11.583 [2024-07-15 09:52:39.491951] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:11.583 [2024-07-15 09:52:39.492014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:11.583 [2024-07-15 09:52:39.492019] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:11.583 [2024-07-15 09:52:39.492026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:11.583 [2024-07-15 09:52:39.492029] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:11.583 [2024-07-15 09:52:39.492035] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:11.583 [2024-07-15 09:52:39.492037] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:11.583 [2024-07-15 09:52:39.492043] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:11.583 09:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:11.583 09:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:11.583 09:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:11.583 09:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:11.583 09:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:11.583 09:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:11.583 09:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:11.583 09:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:11.583 09:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:11.583 09:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:11.583 09:52:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:11.583 09:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:11.842 09:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:11.842 "name": "Existed_Raid", 00:26:11.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.842 "strip_size_kb": 0, 00:26:11.842 "state": "configuring", 00:26:11.842 "raid_level": "raid1", 00:26:11.842 "superblock": false, 00:26:11.842 "num_base_bdevs": 4, 00:26:11.842 "num_base_bdevs_discovered": 0, 00:26:11.842 "num_base_bdevs_operational": 4, 00:26:11.842 "base_bdevs_list": [ 00:26:11.842 { 00:26:11.842 "name": "BaseBdev1", 00:26:11.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.842 "is_configured": false, 00:26:11.842 "data_offset": 0, 00:26:11.842 "data_size": 0 00:26:11.842 }, 00:26:11.842 { 00:26:11.842 "name": "BaseBdev2", 00:26:11.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.842 "is_configured": false, 00:26:11.842 "data_offset": 0, 00:26:11.842 "data_size": 0 00:26:11.842 }, 00:26:11.842 { 00:26:11.842 "name": "BaseBdev3", 00:26:11.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.842 "is_configured": false, 00:26:11.842 "data_offset": 0, 00:26:11.842 "data_size": 0 00:26:11.842 }, 00:26:11.842 { 00:26:11.842 "name": "BaseBdev4", 00:26:11.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.842 "is_configured": false, 00:26:11.842 "data_offset": 0, 00:26:11.842 "data_size": 0 00:26:11.842 } 00:26:11.842 ] 00:26:11.842 }' 00:26:11.842 09:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:11.842 09:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.099 09:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:12.357 [2024-07-15 09:52:40.223964] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:12.357 [2024-07-15 09:52:40.224001] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x30e003c34500 name Existed_Raid, state configuring 00:26:12.357 09:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:12.357 [2024-07-15 09:52:40.451979] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:12.357 [2024-07-15 09:52:40.452030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:12.357 [2024-07-15 09:52:40.452034] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:12.357 [2024-07-15 09:52:40.452159] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:12.357 [2024-07-15 09:52:40.452162] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:12.357 [2024-07-15 09:52:40.452169] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:12.357 [2024-07-15 09:52:40.452172] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:12.357 
[2024-07-15 09:52:40.452177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:12.615 09:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:12.615 [2024-07-15 09:52:40.653128] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:12.615 BaseBdev1 00:26:12.615 09:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:26:12.615 09:52:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:26:12.615 09:52:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:12.615 09:52:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:26:12.615 09:52:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:12.615 09:52:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:12.615 09:52:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:12.873 09:52:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:13.145 [ 00:26:13.145 { 00:26:13.145 "name": "BaseBdev1", 00:26:13.145 "aliases": [ 00:26:13.145 "f99060e1-428f-11ef-a0af-c98d8ee52a94" 00:26:13.145 ], 00:26:13.145 "product_name": "Malloc disk", 00:26:13.145 "block_size": 512, 00:26:13.145 "num_blocks": 65536, 00:26:13.145 "uuid": "f99060e1-428f-11ef-a0af-c98d8ee52a94", 00:26:13.145 "assigned_rate_limits": { 00:26:13.145 "rw_ios_per_sec": 0, 00:26:13.145 "rw_mbytes_per_sec": 0, 00:26:13.145 "r_mbytes_per_sec": 0, 00:26:13.145 "w_mbytes_per_sec": 0 00:26:13.145 }, 00:26:13.145 "claimed": true, 00:26:13.145 "claim_type": "exclusive_write", 00:26:13.145 "zoned": false, 00:26:13.145 "supported_io_types": { 00:26:13.145 "read": true, 00:26:13.145 "write": true, 00:26:13.145 "unmap": true, 00:26:13.145 "flush": true, 00:26:13.145 "reset": true, 00:26:13.145 "nvme_admin": false, 00:26:13.145 "nvme_io": false, 00:26:13.145 "nvme_io_md": false, 00:26:13.145 "write_zeroes": true, 00:26:13.145 "zcopy": true, 00:26:13.145 "get_zone_info": false, 00:26:13.145 "zone_management": false, 00:26:13.145 "zone_append": false, 00:26:13.145 "compare": false, 00:26:13.145 "compare_and_write": false, 00:26:13.145 "abort": true, 00:26:13.145 "seek_hole": false, 00:26:13.145 "seek_data": false, 00:26:13.145 "copy": true, 00:26:13.145 "nvme_iov_md": false 00:26:13.145 }, 00:26:13.145 "memory_domains": [ 00:26:13.145 { 00:26:13.145 "dma_device_id": "system", 00:26:13.145 "dma_device_type": 1 00:26:13.145 }, 00:26:13.145 { 00:26:13.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:13.145 "dma_device_type": 2 00:26:13.145 } 00:26:13.145 ], 00:26:13.145 "driver_specific": {} 00:26:13.145 } 00:26:13.145 ] 00:26:13.145 09:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:26:13.145 09:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:13.145 09:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 
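[annotation] The trace above creates the raid1 array before any of its base bdevs exist, so rpc_bdev_raid_create registers it in the "configuring" state, and each bdev_malloc_create then hands the raid module one base bdev to claim (32 MiB at a 512-byte block size, which is the 65536-block geometry reported for BaseBdev1 above). A minimal sketch of the same RPC sequence, assuming only a running SPDK target that owns the /var/tmp/spdk-raid.sock socket as in this run (this is an illustrative reconstruction, not the verbatim test script):

    # Paths exactly as they appear in the trace above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Register the raid1 bdev first; its base bdevs do not exist yet,
    # so the array is created in the "configuring" state.
    "$rpc" -s "$sock" bdev_raid_create -r raid1 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

    # 32 MiB / 512 B blocks -> the 65536-block Malloc disk seen above;
    # the raid module claims it as soon as examine completes.
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev1
    "$rpc" -s "$sock" bdev_wait_for_examine

    # Same jq filter the test uses; with only one of four base bdevs
    # discovered, the reported state is still "configuring".
    "$rpc" -s "$sock" bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid").state'

Repeating the bdev_malloc_create/verify pair for BaseBdev2 through BaseBdev4 is what the @265 loop traced below does; the state only flips to "online" once the fourth base bdev has been claimed.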
00:26:13.145 09:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:13.145 09:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:13.145 09:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:13.145 09:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:13.145 09:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:13.145 09:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:13.145 09:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:13.145 09:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:13.145 09:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:13.145 09:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:13.452 09:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:13.452 "name": "Existed_Raid", 00:26:13.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:13.452 "strip_size_kb": 0, 00:26:13.452 "state": "configuring", 00:26:13.452 "raid_level": "raid1", 00:26:13.452 "superblock": false, 00:26:13.452 "num_base_bdevs": 4, 00:26:13.452 "num_base_bdevs_discovered": 1, 00:26:13.452 "num_base_bdevs_operational": 4, 00:26:13.452 "base_bdevs_list": [ 00:26:13.452 { 00:26:13.452 "name": "BaseBdev1", 00:26:13.452 "uuid": "f99060e1-428f-11ef-a0af-c98d8ee52a94", 00:26:13.452 "is_configured": true, 00:26:13.452 "data_offset": 0, 00:26:13.452 "data_size": 65536 00:26:13.452 }, 00:26:13.452 { 00:26:13.452 "name": "BaseBdev2", 00:26:13.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:13.452 "is_configured": false, 00:26:13.452 "data_offset": 0, 00:26:13.452 "data_size": 0 00:26:13.452 }, 00:26:13.452 { 00:26:13.452 "name": "BaseBdev3", 00:26:13.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:13.452 "is_configured": false, 00:26:13.452 "data_offset": 0, 00:26:13.452 "data_size": 0 00:26:13.452 }, 00:26:13.452 { 00:26:13.452 "name": "BaseBdev4", 00:26:13.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:13.452 "is_configured": false, 00:26:13.452 "data_offset": 0, 00:26:13.452 "data_size": 0 00:26:13.452 } 00:26:13.452 ] 00:26:13.452 }' 00:26:13.452 09:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:13.452 09:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.452 09:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:13.710 [2024-07-15 09:52:41.728031] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:13.710 [2024-07-15 09:52:41.728062] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x30e003c34500 name Existed_Raid, state configuring 00:26:13.710 09:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:13.969 
[2024-07-15 09:52:41.924051] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:13.969 [2024-07-15 09:52:41.924925] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:13.969 [2024-07-15 09:52:41.924968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:13.969 [2024-07-15 09:52:41.924973] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:13.969 [2024-07-15 09:52:41.924980] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:13.969 [2024-07-15 09:52:41.924983] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:13.969 [2024-07-15 09:52:41.924989] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:13.969 09:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:26:13.969 09:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:13.969 09:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:13.969 09:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:13.969 09:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:13.969 09:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:13.969 09:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:13.969 09:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:13.969 09:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:13.969 09:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:13.969 09:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:13.969 09:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:13.969 09:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:13.969 09:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:14.229 09:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:14.229 "name": "Existed_Raid", 00:26:14.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:14.229 "strip_size_kb": 0, 00:26:14.229 "state": "configuring", 00:26:14.229 "raid_level": "raid1", 00:26:14.229 "superblock": false, 00:26:14.229 "num_base_bdevs": 4, 00:26:14.229 "num_base_bdevs_discovered": 1, 00:26:14.229 "num_base_bdevs_operational": 4, 00:26:14.229 "base_bdevs_list": [ 00:26:14.229 { 00:26:14.229 "name": "BaseBdev1", 00:26:14.229 "uuid": "f99060e1-428f-11ef-a0af-c98d8ee52a94", 00:26:14.229 "is_configured": true, 00:26:14.229 "data_offset": 0, 00:26:14.229 "data_size": 65536 00:26:14.229 }, 00:26:14.229 { 00:26:14.229 "name": "BaseBdev2", 00:26:14.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:14.229 "is_configured": false, 00:26:14.229 "data_offset": 0, 00:26:14.229 "data_size": 0 00:26:14.229 }, 00:26:14.229 { 
00:26:14.229 "name": "BaseBdev3", 00:26:14.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:14.229 "is_configured": false, 00:26:14.229 "data_offset": 0, 00:26:14.229 "data_size": 0 00:26:14.229 }, 00:26:14.229 { 00:26:14.229 "name": "BaseBdev4", 00:26:14.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:14.229 "is_configured": false, 00:26:14.229 "data_offset": 0, 00:26:14.229 "data_size": 0 00:26:14.229 } 00:26:14.229 ] 00:26:14.229 }' 00:26:14.229 09:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:14.229 09:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.488 09:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:14.747 [2024-07-15 09:52:42.600238] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:14.747 BaseBdev2 00:26:14.747 09:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:26:14.747 09:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:26:14.747 09:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:14.747 09:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:26:14.747 09:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:14.747 09:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:14.747 09:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:14.747 09:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:15.005 [ 00:26:15.005 { 00:26:15.005 "name": "BaseBdev2", 00:26:15.005 "aliases": [ 00:26:15.005 "fab9a23f-428f-11ef-a0af-c98d8ee52a94" 00:26:15.006 ], 00:26:15.006 "product_name": "Malloc disk", 00:26:15.006 "block_size": 512, 00:26:15.006 "num_blocks": 65536, 00:26:15.006 "uuid": "fab9a23f-428f-11ef-a0af-c98d8ee52a94", 00:26:15.006 "assigned_rate_limits": { 00:26:15.006 "rw_ios_per_sec": 0, 00:26:15.006 "rw_mbytes_per_sec": 0, 00:26:15.006 "r_mbytes_per_sec": 0, 00:26:15.006 "w_mbytes_per_sec": 0 00:26:15.006 }, 00:26:15.006 "claimed": true, 00:26:15.006 "claim_type": "exclusive_write", 00:26:15.006 "zoned": false, 00:26:15.006 "supported_io_types": { 00:26:15.006 "read": true, 00:26:15.006 "write": true, 00:26:15.006 "unmap": true, 00:26:15.006 "flush": true, 00:26:15.006 "reset": true, 00:26:15.006 "nvme_admin": false, 00:26:15.006 "nvme_io": false, 00:26:15.006 "nvme_io_md": false, 00:26:15.006 "write_zeroes": true, 00:26:15.006 "zcopy": true, 00:26:15.006 "get_zone_info": false, 00:26:15.006 "zone_management": false, 00:26:15.006 "zone_append": false, 00:26:15.006 "compare": false, 00:26:15.006 "compare_and_write": false, 00:26:15.006 "abort": true, 00:26:15.006 "seek_hole": false, 00:26:15.006 "seek_data": false, 00:26:15.006 "copy": true, 00:26:15.006 "nvme_iov_md": false 00:26:15.006 }, 00:26:15.006 "memory_domains": [ 00:26:15.006 { 00:26:15.006 "dma_device_id": "system", 00:26:15.006 "dma_device_type": 1 00:26:15.006 }, 00:26:15.006 { 00:26:15.006 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:15.006 "dma_device_type": 2 00:26:15.006 } 00:26:15.006 ], 00:26:15.006 "driver_specific": {} 00:26:15.006 } 00:26:15.006 ] 00:26:15.006 09:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:26:15.006 09:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:15.006 09:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:15.006 09:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:15.006 09:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:15.006 09:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:15.006 09:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:15.006 09:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:15.006 09:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:15.006 09:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:15.006 09:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:15.006 09:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:15.006 09:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:15.006 09:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:15.006 09:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:15.265 09:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:15.265 "name": "Existed_Raid", 00:26:15.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:15.265 "strip_size_kb": 0, 00:26:15.265 "state": "configuring", 00:26:15.265 "raid_level": "raid1", 00:26:15.265 "superblock": false, 00:26:15.265 "num_base_bdevs": 4, 00:26:15.265 "num_base_bdevs_discovered": 2, 00:26:15.265 "num_base_bdevs_operational": 4, 00:26:15.265 "base_bdevs_list": [ 00:26:15.265 { 00:26:15.265 "name": "BaseBdev1", 00:26:15.265 "uuid": "f99060e1-428f-11ef-a0af-c98d8ee52a94", 00:26:15.265 "is_configured": true, 00:26:15.265 "data_offset": 0, 00:26:15.265 "data_size": 65536 00:26:15.265 }, 00:26:15.265 { 00:26:15.265 "name": "BaseBdev2", 00:26:15.265 "uuid": "fab9a23f-428f-11ef-a0af-c98d8ee52a94", 00:26:15.265 "is_configured": true, 00:26:15.265 "data_offset": 0, 00:26:15.265 "data_size": 65536 00:26:15.265 }, 00:26:15.265 { 00:26:15.265 "name": "BaseBdev3", 00:26:15.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:15.265 "is_configured": false, 00:26:15.265 "data_offset": 0, 00:26:15.265 "data_size": 0 00:26:15.265 }, 00:26:15.265 { 00:26:15.265 "name": "BaseBdev4", 00:26:15.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:15.265 "is_configured": false, 00:26:15.265 "data_offset": 0, 00:26:15.265 "data_size": 0 00:26:15.265 } 00:26:15.266 ] 00:26:15.266 }' 00:26:15.266 09:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:15.266 09:52:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:26:15.525 09:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:15.784 [2024-07-15 09:52:43.640242] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:15.784 BaseBdev3 00:26:15.784 09:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:26:15.784 09:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:26:15.784 09:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:15.784 09:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:26:15.784 09:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:15.784 09:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:15.784 09:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:15.784 09:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:16.043 [ 00:26:16.043 { 00:26:16.043 "name": "BaseBdev3", 00:26:16.043 "aliases": [ 00:26:16.043 "fb585532-428f-11ef-a0af-c98d8ee52a94" 00:26:16.043 ], 00:26:16.043 "product_name": "Malloc disk", 00:26:16.043 "block_size": 512, 00:26:16.043 "num_blocks": 65536, 00:26:16.043 "uuid": "fb585532-428f-11ef-a0af-c98d8ee52a94", 00:26:16.043 "assigned_rate_limits": { 00:26:16.043 "rw_ios_per_sec": 0, 00:26:16.043 "rw_mbytes_per_sec": 0, 00:26:16.043 "r_mbytes_per_sec": 0, 00:26:16.043 "w_mbytes_per_sec": 0 00:26:16.043 }, 00:26:16.043 "claimed": true, 00:26:16.044 "claim_type": "exclusive_write", 00:26:16.044 "zoned": false, 00:26:16.044 "supported_io_types": { 00:26:16.044 "read": true, 00:26:16.044 "write": true, 00:26:16.044 "unmap": true, 00:26:16.044 "flush": true, 00:26:16.044 "reset": true, 00:26:16.044 "nvme_admin": false, 00:26:16.044 "nvme_io": false, 00:26:16.044 "nvme_io_md": false, 00:26:16.044 "write_zeroes": true, 00:26:16.044 "zcopy": true, 00:26:16.044 "get_zone_info": false, 00:26:16.044 "zone_management": false, 00:26:16.044 "zone_append": false, 00:26:16.044 "compare": false, 00:26:16.044 "compare_and_write": false, 00:26:16.044 "abort": true, 00:26:16.044 "seek_hole": false, 00:26:16.044 "seek_data": false, 00:26:16.044 "copy": true, 00:26:16.044 "nvme_iov_md": false 00:26:16.044 }, 00:26:16.044 "memory_domains": [ 00:26:16.044 { 00:26:16.044 "dma_device_id": "system", 00:26:16.044 "dma_device_type": 1 00:26:16.044 }, 00:26:16.044 { 00:26:16.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:16.044 "dma_device_type": 2 00:26:16.044 } 00:26:16.044 ], 00:26:16.044 "driver_specific": {} 00:26:16.044 } 00:26:16.044 ] 00:26:16.044 09:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:26:16.044 09:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:16.044 09:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:16.044 09:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring 
raid1 0 4 00:26:16.044 09:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:16.044 09:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:16.044 09:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:16.044 09:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:16.044 09:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:16.044 09:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:16.044 09:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:16.044 09:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:16.044 09:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:16.044 09:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:16.044 09:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:16.303 09:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:16.303 "name": "Existed_Raid", 00:26:16.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:16.303 "strip_size_kb": 0, 00:26:16.303 "state": "configuring", 00:26:16.303 "raid_level": "raid1", 00:26:16.303 "superblock": false, 00:26:16.303 "num_base_bdevs": 4, 00:26:16.303 "num_base_bdevs_discovered": 3, 00:26:16.303 "num_base_bdevs_operational": 4, 00:26:16.303 "base_bdevs_list": [ 00:26:16.303 { 00:26:16.303 "name": "BaseBdev1", 00:26:16.303 "uuid": "f99060e1-428f-11ef-a0af-c98d8ee52a94", 00:26:16.303 "is_configured": true, 00:26:16.303 "data_offset": 0, 00:26:16.303 "data_size": 65536 00:26:16.303 }, 00:26:16.303 { 00:26:16.303 "name": "BaseBdev2", 00:26:16.303 "uuid": "fab9a23f-428f-11ef-a0af-c98d8ee52a94", 00:26:16.303 "is_configured": true, 00:26:16.303 "data_offset": 0, 00:26:16.303 "data_size": 65536 00:26:16.303 }, 00:26:16.303 { 00:26:16.303 "name": "BaseBdev3", 00:26:16.303 "uuid": "fb585532-428f-11ef-a0af-c98d8ee52a94", 00:26:16.303 "is_configured": true, 00:26:16.303 "data_offset": 0, 00:26:16.303 "data_size": 65536 00:26:16.303 }, 00:26:16.303 { 00:26:16.303 "name": "BaseBdev4", 00:26:16.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:16.303 "is_configured": false, 00:26:16.303 "data_offset": 0, 00:26:16.303 "data_size": 0 00:26:16.303 } 00:26:16.303 ] 00:26:16.303 }' 00:26:16.303 09:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:16.303 09:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.618 09:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:26:16.618 [2024-07-15 09:52:44.684278] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:16.618 [2024-07-15 09:52:44.684307] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x30e003c34a00 00:26:16.618 [2024-07-15 09:52:44.684311] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:26:16.618 
[2024-07-15 09:52:44.684338] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x30e003c97e20 00:26:16.618 [2024-07-15 09:52:44.684439] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x30e003c34a00 00:26:16.618 [2024-07-15 09:52:44.684443] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x30e003c34a00 00:26:16.618 [2024-07-15 09:52:44.684474] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:16.618 BaseBdev4 00:26:16.618 09:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:26:16.618 09:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:26:16.618 09:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:16.618 09:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:26:16.618 09:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:16.618 09:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:16.618 09:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:16.877 09:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:17.136 [ 00:26:17.136 { 00:26:17.136 "name": "BaseBdev4", 00:26:17.137 "aliases": [ 00:26:17.137 "fbf7a441-428f-11ef-a0af-c98d8ee52a94" 00:26:17.137 ], 00:26:17.137 "product_name": "Malloc disk", 00:26:17.137 "block_size": 512, 00:26:17.137 "num_blocks": 65536, 00:26:17.137 "uuid": "fbf7a441-428f-11ef-a0af-c98d8ee52a94", 00:26:17.137 "assigned_rate_limits": { 00:26:17.137 "rw_ios_per_sec": 0, 00:26:17.137 "rw_mbytes_per_sec": 0, 00:26:17.137 "r_mbytes_per_sec": 0, 00:26:17.137 "w_mbytes_per_sec": 0 00:26:17.137 }, 00:26:17.137 "claimed": true, 00:26:17.137 "claim_type": "exclusive_write", 00:26:17.137 "zoned": false, 00:26:17.137 "supported_io_types": { 00:26:17.137 "read": true, 00:26:17.137 "write": true, 00:26:17.137 "unmap": true, 00:26:17.137 "flush": true, 00:26:17.137 "reset": true, 00:26:17.137 "nvme_admin": false, 00:26:17.137 "nvme_io": false, 00:26:17.137 "nvme_io_md": false, 00:26:17.137 "write_zeroes": true, 00:26:17.137 "zcopy": true, 00:26:17.137 "get_zone_info": false, 00:26:17.137 "zone_management": false, 00:26:17.137 "zone_append": false, 00:26:17.137 "compare": false, 00:26:17.137 "compare_and_write": false, 00:26:17.137 "abort": true, 00:26:17.137 "seek_hole": false, 00:26:17.137 "seek_data": false, 00:26:17.137 "copy": true, 00:26:17.137 "nvme_iov_md": false 00:26:17.137 }, 00:26:17.137 "memory_domains": [ 00:26:17.137 { 00:26:17.137 "dma_device_id": "system", 00:26:17.137 "dma_device_type": 1 00:26:17.137 }, 00:26:17.137 { 00:26:17.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:17.137 "dma_device_type": 2 00:26:17.137 } 00:26:17.137 ], 00:26:17.137 "driver_specific": {} 00:26:17.137 } 00:26:17.137 ] 00:26:17.137 09:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:26:17.137 09:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:17.137 09:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < 
num_base_bdevs )) 00:26:17.137 09:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:26:17.137 09:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:17.137 09:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:17.137 09:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:17.137 09:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:17.137 09:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:17.137 09:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:17.137 09:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:17.137 09:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:17.137 09:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:17.137 09:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:17.137 09:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:17.396 09:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:17.396 "name": "Existed_Raid", 00:26:17.396 "uuid": "fbf7a959-428f-11ef-a0af-c98d8ee52a94", 00:26:17.396 "strip_size_kb": 0, 00:26:17.396 "state": "online", 00:26:17.396 "raid_level": "raid1", 00:26:17.396 "superblock": false, 00:26:17.396 "num_base_bdevs": 4, 00:26:17.396 "num_base_bdevs_discovered": 4, 00:26:17.396 "num_base_bdevs_operational": 4, 00:26:17.396 "base_bdevs_list": [ 00:26:17.396 { 00:26:17.396 "name": "BaseBdev1", 00:26:17.396 "uuid": "f99060e1-428f-11ef-a0af-c98d8ee52a94", 00:26:17.396 "is_configured": true, 00:26:17.396 "data_offset": 0, 00:26:17.396 "data_size": 65536 00:26:17.396 }, 00:26:17.396 { 00:26:17.396 "name": "BaseBdev2", 00:26:17.396 "uuid": "fab9a23f-428f-11ef-a0af-c98d8ee52a94", 00:26:17.396 "is_configured": true, 00:26:17.396 "data_offset": 0, 00:26:17.396 "data_size": 65536 00:26:17.396 }, 00:26:17.396 { 00:26:17.396 "name": "BaseBdev3", 00:26:17.396 "uuid": "fb585532-428f-11ef-a0af-c98d8ee52a94", 00:26:17.396 "is_configured": true, 00:26:17.396 "data_offset": 0, 00:26:17.396 "data_size": 65536 00:26:17.396 }, 00:26:17.396 { 00:26:17.396 "name": "BaseBdev4", 00:26:17.396 "uuid": "fbf7a441-428f-11ef-a0af-c98d8ee52a94", 00:26:17.396 "is_configured": true, 00:26:17.396 "data_offset": 0, 00:26:17.396 "data_size": 65536 00:26:17.396 } 00:26:17.396 ] 00:26:17.396 }' 00:26:17.396 09:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:17.396 09:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.655 09:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:26:17.655 09:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:26:17.655 09:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:17.655 09:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # 
local base_bdev_info 00:26:17.655 09:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:17.655 09:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:26:17.655 09:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:17.655 09:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:17.914 [2024-07-15 09:52:45.764278] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:17.914 09:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:17.914 "name": "Existed_Raid", 00:26:17.914 "aliases": [ 00:26:17.914 "fbf7a959-428f-11ef-a0af-c98d8ee52a94" 00:26:17.914 ], 00:26:17.914 "product_name": "Raid Volume", 00:26:17.914 "block_size": 512, 00:26:17.914 "num_blocks": 65536, 00:26:17.914 "uuid": "fbf7a959-428f-11ef-a0af-c98d8ee52a94", 00:26:17.914 "assigned_rate_limits": { 00:26:17.914 "rw_ios_per_sec": 0, 00:26:17.914 "rw_mbytes_per_sec": 0, 00:26:17.914 "r_mbytes_per_sec": 0, 00:26:17.914 "w_mbytes_per_sec": 0 00:26:17.914 }, 00:26:17.914 "claimed": false, 00:26:17.914 "zoned": false, 00:26:17.914 "supported_io_types": { 00:26:17.914 "read": true, 00:26:17.914 "write": true, 00:26:17.914 "unmap": false, 00:26:17.914 "flush": false, 00:26:17.914 "reset": true, 00:26:17.914 "nvme_admin": false, 00:26:17.914 "nvme_io": false, 00:26:17.914 "nvme_io_md": false, 00:26:17.914 "write_zeroes": true, 00:26:17.914 "zcopy": false, 00:26:17.914 "get_zone_info": false, 00:26:17.914 "zone_management": false, 00:26:17.914 "zone_append": false, 00:26:17.914 "compare": false, 00:26:17.914 "compare_and_write": false, 00:26:17.914 "abort": false, 00:26:17.914 "seek_hole": false, 00:26:17.914 "seek_data": false, 00:26:17.914 "copy": false, 00:26:17.914 "nvme_iov_md": false 00:26:17.914 }, 00:26:17.914 "memory_domains": [ 00:26:17.914 { 00:26:17.914 "dma_device_id": "system", 00:26:17.914 "dma_device_type": 1 00:26:17.914 }, 00:26:17.914 { 00:26:17.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:17.914 "dma_device_type": 2 00:26:17.914 }, 00:26:17.914 { 00:26:17.914 "dma_device_id": "system", 00:26:17.914 "dma_device_type": 1 00:26:17.914 }, 00:26:17.914 { 00:26:17.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:17.914 "dma_device_type": 2 00:26:17.914 }, 00:26:17.914 { 00:26:17.914 "dma_device_id": "system", 00:26:17.914 "dma_device_type": 1 00:26:17.914 }, 00:26:17.914 { 00:26:17.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:17.914 "dma_device_type": 2 00:26:17.914 }, 00:26:17.914 { 00:26:17.914 "dma_device_id": "system", 00:26:17.914 "dma_device_type": 1 00:26:17.914 }, 00:26:17.914 { 00:26:17.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:17.914 "dma_device_type": 2 00:26:17.914 } 00:26:17.914 ], 00:26:17.915 "driver_specific": { 00:26:17.915 "raid": { 00:26:17.915 "uuid": "fbf7a959-428f-11ef-a0af-c98d8ee52a94", 00:26:17.915 "strip_size_kb": 0, 00:26:17.915 "state": "online", 00:26:17.915 "raid_level": "raid1", 00:26:17.915 "superblock": false, 00:26:17.915 "num_base_bdevs": 4, 00:26:17.915 "num_base_bdevs_discovered": 4, 00:26:17.915 "num_base_bdevs_operational": 4, 00:26:17.915 "base_bdevs_list": [ 00:26:17.915 { 00:26:17.915 "name": "BaseBdev1", 00:26:17.915 "uuid": "f99060e1-428f-11ef-a0af-c98d8ee52a94", 00:26:17.915 "is_configured": true, 00:26:17.915 "data_offset": 0, 00:26:17.915 
"data_size": 65536 00:26:17.915 }, 00:26:17.915 { 00:26:17.915 "name": "BaseBdev2", 00:26:17.915 "uuid": "fab9a23f-428f-11ef-a0af-c98d8ee52a94", 00:26:17.915 "is_configured": true, 00:26:17.915 "data_offset": 0, 00:26:17.915 "data_size": 65536 00:26:17.915 }, 00:26:17.915 { 00:26:17.915 "name": "BaseBdev3", 00:26:17.915 "uuid": "fb585532-428f-11ef-a0af-c98d8ee52a94", 00:26:17.915 "is_configured": true, 00:26:17.915 "data_offset": 0, 00:26:17.915 "data_size": 65536 00:26:17.915 }, 00:26:17.915 { 00:26:17.915 "name": "BaseBdev4", 00:26:17.915 "uuid": "fbf7a441-428f-11ef-a0af-c98d8ee52a94", 00:26:17.915 "is_configured": true, 00:26:17.915 "data_offset": 0, 00:26:17.915 "data_size": 65536 00:26:17.915 } 00:26:17.915 ] 00:26:17.915 } 00:26:17.915 } 00:26:17.915 }' 00:26:17.915 09:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:17.915 09:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:26:17.915 BaseBdev2 00:26:17.915 BaseBdev3 00:26:17.915 BaseBdev4' 00:26:17.915 09:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:17.915 09:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:26:17.915 09:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:17.915 09:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:17.915 "name": "BaseBdev1", 00:26:17.915 "aliases": [ 00:26:17.915 "f99060e1-428f-11ef-a0af-c98d8ee52a94" 00:26:17.915 ], 00:26:17.915 "product_name": "Malloc disk", 00:26:17.915 "block_size": 512, 00:26:17.915 "num_blocks": 65536, 00:26:17.915 "uuid": "f99060e1-428f-11ef-a0af-c98d8ee52a94", 00:26:17.915 "assigned_rate_limits": { 00:26:17.915 "rw_ios_per_sec": 0, 00:26:17.915 "rw_mbytes_per_sec": 0, 00:26:17.915 "r_mbytes_per_sec": 0, 00:26:17.915 "w_mbytes_per_sec": 0 00:26:17.915 }, 00:26:17.915 "claimed": true, 00:26:17.915 "claim_type": "exclusive_write", 00:26:17.915 "zoned": false, 00:26:17.915 "supported_io_types": { 00:26:17.915 "read": true, 00:26:17.915 "write": true, 00:26:17.915 "unmap": true, 00:26:17.915 "flush": true, 00:26:17.915 "reset": true, 00:26:17.915 "nvme_admin": false, 00:26:17.915 "nvme_io": false, 00:26:17.915 "nvme_io_md": false, 00:26:17.915 "write_zeroes": true, 00:26:17.915 "zcopy": true, 00:26:17.915 "get_zone_info": false, 00:26:17.915 "zone_management": false, 00:26:17.915 "zone_append": false, 00:26:17.915 "compare": false, 00:26:17.915 "compare_and_write": false, 00:26:17.915 "abort": true, 00:26:17.915 "seek_hole": false, 00:26:17.915 "seek_data": false, 00:26:17.915 "copy": true, 00:26:17.915 "nvme_iov_md": false 00:26:17.915 }, 00:26:17.915 "memory_domains": [ 00:26:17.915 { 00:26:17.915 "dma_device_id": "system", 00:26:17.915 "dma_device_type": 1 00:26:17.915 }, 00:26:17.915 { 00:26:17.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:17.915 "dma_device_type": 2 00:26:17.915 } 00:26:17.915 ], 00:26:17.915 "driver_specific": {} 00:26:17.915 }' 00:26:17.915 09:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:17.915 09:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:17.915 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
00:26:17.915 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:18.177 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:18.177 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:18.177 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:18.177 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:18.177 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:18.177 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:18.177 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:18.177 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:18.177 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:18.177 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:18.177 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:18.177 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:18.177 "name": "BaseBdev2", 00:26:18.177 "aliases": [ 00:26:18.177 "fab9a23f-428f-11ef-a0af-c98d8ee52a94" 00:26:18.177 ], 00:26:18.177 "product_name": "Malloc disk", 00:26:18.177 "block_size": 512, 00:26:18.177 "num_blocks": 65536, 00:26:18.177 "uuid": "fab9a23f-428f-11ef-a0af-c98d8ee52a94", 00:26:18.177 "assigned_rate_limits": { 00:26:18.177 "rw_ios_per_sec": 0, 00:26:18.177 "rw_mbytes_per_sec": 0, 00:26:18.177 "r_mbytes_per_sec": 0, 00:26:18.177 "w_mbytes_per_sec": 0 00:26:18.177 }, 00:26:18.177 "claimed": true, 00:26:18.177 "claim_type": "exclusive_write", 00:26:18.177 "zoned": false, 00:26:18.177 "supported_io_types": { 00:26:18.177 "read": true, 00:26:18.177 "write": true, 00:26:18.177 "unmap": true, 00:26:18.177 "flush": true, 00:26:18.177 "reset": true, 00:26:18.177 "nvme_admin": false, 00:26:18.177 "nvme_io": false, 00:26:18.177 "nvme_io_md": false, 00:26:18.177 "write_zeroes": true, 00:26:18.177 "zcopy": true, 00:26:18.177 "get_zone_info": false, 00:26:18.177 "zone_management": false, 00:26:18.177 "zone_append": false, 00:26:18.177 "compare": false, 00:26:18.177 "compare_and_write": false, 00:26:18.177 "abort": true, 00:26:18.177 "seek_hole": false, 00:26:18.177 "seek_data": false, 00:26:18.177 "copy": true, 00:26:18.177 "nvme_iov_md": false 00:26:18.177 }, 00:26:18.178 "memory_domains": [ 00:26:18.178 { 00:26:18.178 "dma_device_id": "system", 00:26:18.178 "dma_device_type": 1 00:26:18.178 }, 00:26:18.178 { 00:26:18.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:18.178 "dma_device_type": 2 00:26:18.178 } 00:26:18.178 ], 00:26:18.178 "driver_specific": {} 00:26:18.178 }' 00:26:18.178 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:18.445 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:18.445 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:18.445 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:18.445 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:26:18.445 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:18.445 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:18.445 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:18.445 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:18.445 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:18.445 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:18.445 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:18.445 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:18.445 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:26:18.445 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:18.703 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:18.703 "name": "BaseBdev3", 00:26:18.703 "aliases": [ 00:26:18.703 "fb585532-428f-11ef-a0af-c98d8ee52a94" 00:26:18.703 ], 00:26:18.703 "product_name": "Malloc disk", 00:26:18.704 "block_size": 512, 00:26:18.704 "num_blocks": 65536, 00:26:18.704 "uuid": "fb585532-428f-11ef-a0af-c98d8ee52a94", 00:26:18.704 "assigned_rate_limits": { 00:26:18.704 "rw_ios_per_sec": 0, 00:26:18.704 "rw_mbytes_per_sec": 0, 00:26:18.704 "r_mbytes_per_sec": 0, 00:26:18.704 "w_mbytes_per_sec": 0 00:26:18.704 }, 00:26:18.704 "claimed": true, 00:26:18.704 "claim_type": "exclusive_write", 00:26:18.704 "zoned": false, 00:26:18.704 "supported_io_types": { 00:26:18.704 "read": true, 00:26:18.704 "write": true, 00:26:18.704 "unmap": true, 00:26:18.704 "flush": true, 00:26:18.704 "reset": true, 00:26:18.704 "nvme_admin": false, 00:26:18.704 "nvme_io": false, 00:26:18.704 "nvme_io_md": false, 00:26:18.704 "write_zeroes": true, 00:26:18.704 "zcopy": true, 00:26:18.704 "get_zone_info": false, 00:26:18.704 "zone_management": false, 00:26:18.704 "zone_append": false, 00:26:18.704 "compare": false, 00:26:18.704 "compare_and_write": false, 00:26:18.704 "abort": true, 00:26:18.704 "seek_hole": false, 00:26:18.704 "seek_data": false, 00:26:18.704 "copy": true, 00:26:18.704 "nvme_iov_md": false 00:26:18.704 }, 00:26:18.704 "memory_domains": [ 00:26:18.704 { 00:26:18.704 "dma_device_id": "system", 00:26:18.704 "dma_device_type": 1 00:26:18.704 }, 00:26:18.704 { 00:26:18.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:18.704 "dma_device_type": 2 00:26:18.704 } 00:26:18.704 ], 00:26:18.704 "driver_specific": {} 00:26:18.704 }' 00:26:18.704 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:18.704 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:18.704 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:18.704 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:18.704 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:18.704 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:18.704 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:26:18.704 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:18.704 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:18.704 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:18.704 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:18.704 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:18.704 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:18.704 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:26:18.704 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:18.962 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:18.962 "name": "BaseBdev4", 00:26:18.962 "aliases": [ 00:26:18.962 "fbf7a441-428f-11ef-a0af-c98d8ee52a94" 00:26:18.962 ], 00:26:18.962 "product_name": "Malloc disk", 00:26:18.962 "block_size": 512, 00:26:18.962 "num_blocks": 65536, 00:26:18.962 "uuid": "fbf7a441-428f-11ef-a0af-c98d8ee52a94", 00:26:18.962 "assigned_rate_limits": { 00:26:18.962 "rw_ios_per_sec": 0, 00:26:18.962 "rw_mbytes_per_sec": 0, 00:26:18.962 "r_mbytes_per_sec": 0, 00:26:18.962 "w_mbytes_per_sec": 0 00:26:18.962 }, 00:26:18.962 "claimed": true, 00:26:18.962 "claim_type": "exclusive_write", 00:26:18.962 "zoned": false, 00:26:18.962 "supported_io_types": { 00:26:18.962 "read": true, 00:26:18.962 "write": true, 00:26:18.962 "unmap": true, 00:26:18.962 "flush": true, 00:26:18.962 "reset": true, 00:26:18.962 "nvme_admin": false, 00:26:18.963 "nvme_io": false, 00:26:18.963 "nvme_io_md": false, 00:26:18.963 "write_zeroes": true, 00:26:18.963 "zcopy": true, 00:26:18.963 "get_zone_info": false, 00:26:18.963 "zone_management": false, 00:26:18.963 "zone_append": false, 00:26:18.963 "compare": false, 00:26:18.963 "compare_and_write": false, 00:26:18.963 "abort": true, 00:26:18.963 "seek_hole": false, 00:26:18.963 "seek_data": false, 00:26:18.963 "copy": true, 00:26:18.963 "nvme_iov_md": false 00:26:18.963 }, 00:26:18.963 "memory_domains": [ 00:26:18.963 { 00:26:18.963 "dma_device_id": "system", 00:26:18.963 "dma_device_type": 1 00:26:18.963 }, 00:26:18.963 { 00:26:18.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:18.963 "dma_device_type": 2 00:26:18.963 } 00:26:18.963 ], 00:26:18.963 "driver_specific": {} 00:26:18.963 }' 00:26:18.963 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:18.963 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:18.963 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:18.963 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:18.963 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:18.963 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:18.963 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:18.963 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:18.963 09:52:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:18.963 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:18.963 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:18.963 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:18.963 09:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:19.223 [2024-07-15 09:52:47.104319] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:19.223 09:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:26:19.223 09:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:26:19.223 09:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:19.223 09:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:26:19.223 09:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:26:19.223 09:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:26:19.223 09:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:19.223 09:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:19.223 09:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:19.223 09:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:19.223 09:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:19.223 09:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:19.223 09:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:19.223 09:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:19.223 09:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:19.223 09:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:19.223 09:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:19.481 09:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:19.481 "name": "Existed_Raid", 00:26:19.481 "uuid": "fbf7a959-428f-11ef-a0af-c98d8ee52a94", 00:26:19.481 "strip_size_kb": 0, 00:26:19.481 "state": "online", 00:26:19.481 "raid_level": "raid1", 00:26:19.481 "superblock": false, 00:26:19.481 "num_base_bdevs": 4, 00:26:19.481 "num_base_bdevs_discovered": 3, 00:26:19.481 "num_base_bdevs_operational": 3, 00:26:19.481 "base_bdevs_list": [ 00:26:19.481 { 00:26:19.481 "name": null, 00:26:19.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:19.481 "is_configured": false, 00:26:19.481 "data_offset": 0, 00:26:19.481 "data_size": 65536 00:26:19.481 }, 00:26:19.481 { 00:26:19.481 "name": "BaseBdev2", 00:26:19.481 "uuid": "fab9a23f-428f-11ef-a0af-c98d8ee52a94", 00:26:19.481 "is_configured": true, 00:26:19.481 "data_offset": 0, 00:26:19.481 "data_size": 65536 
00:26:19.481 }, 00:26:19.481 { 00:26:19.481 "name": "BaseBdev3", 00:26:19.481 "uuid": "fb585532-428f-11ef-a0af-c98d8ee52a94", 00:26:19.481 "is_configured": true, 00:26:19.481 "data_offset": 0, 00:26:19.481 "data_size": 65536 00:26:19.481 }, 00:26:19.481 { 00:26:19.481 "name": "BaseBdev4", 00:26:19.481 "uuid": "fbf7a441-428f-11ef-a0af-c98d8ee52a94", 00:26:19.481 "is_configured": true, 00:26:19.481 "data_offset": 0, 00:26:19.481 "data_size": 65536 00:26:19.481 } 00:26:19.481 ] 00:26:19.481 }' 00:26:19.481 09:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:19.481 09:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:19.739 09:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:26:19.739 09:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:19.739 09:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:19.739 09:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:19.739 09:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:19.740 09:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:19.740 09:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:26:19.999 [2024-07-15 09:52:48.024827] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:19.999 09:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:19.999 09:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:19.999 09:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:19.999 09:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:20.257 09:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:20.257 09:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:20.257 09:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:26:20.517 [2024-07-15 09:52:48.449236] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:20.517 09:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:20.517 09:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:20.517 09:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:20.517 09:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:20.777 09:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:20.777 09:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:20.777 09:52:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:26:21.036 [2024-07-15 09:52:48.885821] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:26:21.036 [2024-07-15 09:52:48.885856] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:21.036 [2024-07-15 09:52:48.894265] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:21.036 [2024-07-15 09:52:48.894283] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:21.036 [2024-07-15 09:52:48.894287] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x30e003c34a00 name Existed_Raid, state offline 00:26:21.036 09:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:21.036 09:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:21.036 09:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:26:21.036 09:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:21.036 09:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:26:21.036 09:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:26:21.036 09:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:26:21.036 09:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:26:21.036 09:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:21.036 09:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:21.295 BaseBdev2 00:26:21.295 09:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:26:21.295 09:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:26:21.295 09:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:21.295 09:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:26:21.295 09:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:21.295 09:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:21.295 09:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:21.554 09:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:21.813 [ 00:26:21.813 { 00:26:21.813 "name": "BaseBdev2", 00:26:21.813 "aliases": [ 00:26:21.813 "feb84d33-428f-11ef-a0af-c98d8ee52a94" 00:26:21.813 ], 00:26:21.813 "product_name": "Malloc disk", 00:26:21.813 "block_size": 512, 00:26:21.813 "num_blocks": 65536, 00:26:21.813 "uuid": "feb84d33-428f-11ef-a0af-c98d8ee52a94", 00:26:21.813 "assigned_rate_limits": { 00:26:21.813 "rw_ios_per_sec": 0, 00:26:21.813 "rw_mbytes_per_sec": 0, 00:26:21.813 
"r_mbytes_per_sec": 0, 00:26:21.813 "w_mbytes_per_sec": 0 00:26:21.813 }, 00:26:21.813 "claimed": false, 00:26:21.813 "zoned": false, 00:26:21.813 "supported_io_types": { 00:26:21.813 "read": true, 00:26:21.813 "write": true, 00:26:21.813 "unmap": true, 00:26:21.813 "flush": true, 00:26:21.813 "reset": true, 00:26:21.813 "nvme_admin": false, 00:26:21.813 "nvme_io": false, 00:26:21.813 "nvme_io_md": false, 00:26:21.813 "write_zeroes": true, 00:26:21.813 "zcopy": true, 00:26:21.813 "get_zone_info": false, 00:26:21.813 "zone_management": false, 00:26:21.813 "zone_append": false, 00:26:21.813 "compare": false, 00:26:21.813 "compare_and_write": false, 00:26:21.813 "abort": true, 00:26:21.813 "seek_hole": false, 00:26:21.813 "seek_data": false, 00:26:21.813 "copy": true, 00:26:21.813 "nvme_iov_md": false 00:26:21.813 }, 00:26:21.813 "memory_domains": [ 00:26:21.813 { 00:26:21.813 "dma_device_id": "system", 00:26:21.813 "dma_device_type": 1 00:26:21.813 }, 00:26:21.813 { 00:26:21.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:21.813 "dma_device_type": 2 00:26:21.813 } 00:26:21.813 ], 00:26:21.813 "driver_specific": {} 00:26:21.813 } 00:26:21.813 ] 00:26:21.813 09:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:26:21.813 09:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:26:21.813 09:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:21.813 09:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:22.072 BaseBdev3 00:26:22.072 09:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:26:22.072 09:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:26:22.072 09:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:22.072 09:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:26:22.072 09:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:22.072 09:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:22.072 09:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:22.072 09:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:22.331 [ 00:26:22.332 { 00:26:22.332 "name": "BaseBdev3", 00:26:22.332 "aliases": [ 00:26:22.332 "ff195a22-428f-11ef-a0af-c98d8ee52a94" 00:26:22.332 ], 00:26:22.332 "product_name": "Malloc disk", 00:26:22.332 "block_size": 512, 00:26:22.332 "num_blocks": 65536, 00:26:22.332 "uuid": "ff195a22-428f-11ef-a0af-c98d8ee52a94", 00:26:22.332 "assigned_rate_limits": { 00:26:22.332 "rw_ios_per_sec": 0, 00:26:22.332 "rw_mbytes_per_sec": 0, 00:26:22.332 "r_mbytes_per_sec": 0, 00:26:22.332 "w_mbytes_per_sec": 0 00:26:22.332 }, 00:26:22.332 "claimed": false, 00:26:22.332 "zoned": false, 00:26:22.332 "supported_io_types": { 00:26:22.332 "read": true, 00:26:22.332 "write": true, 00:26:22.332 "unmap": true, 00:26:22.332 "flush": true, 00:26:22.332 "reset": true, 00:26:22.332 "nvme_admin": false, 
00:26:22.332 "nvme_io": false, 00:26:22.332 "nvme_io_md": false, 00:26:22.332 "write_zeroes": true, 00:26:22.332 "zcopy": true, 00:26:22.332 "get_zone_info": false, 00:26:22.332 "zone_management": false, 00:26:22.332 "zone_append": false, 00:26:22.332 "compare": false, 00:26:22.332 "compare_and_write": false, 00:26:22.332 "abort": true, 00:26:22.332 "seek_hole": false, 00:26:22.332 "seek_data": false, 00:26:22.332 "copy": true, 00:26:22.332 "nvme_iov_md": false 00:26:22.332 }, 00:26:22.332 "memory_domains": [ 00:26:22.332 { 00:26:22.332 "dma_device_id": "system", 00:26:22.332 "dma_device_type": 1 00:26:22.332 }, 00:26:22.332 { 00:26:22.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:22.332 "dma_device_type": 2 00:26:22.332 } 00:26:22.332 ], 00:26:22.332 "driver_specific": {} 00:26:22.332 } 00:26:22.332 ] 00:26:22.332 09:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:26:22.332 09:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:26:22.332 09:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:22.332 09:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:26:22.590 BaseBdev4 00:26:22.590 09:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:26:22.590 09:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:26:22.590 09:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:22.590 09:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:26:22.590 09:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:22.590 09:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:22.590 09:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:22.850 09:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:23.109 [ 00:26:23.109 { 00:26:23.109 "name": "BaseBdev4", 00:26:23.109 "aliases": [ 00:26:23.109 "ff7eae0d-428f-11ef-a0af-c98d8ee52a94" 00:26:23.109 ], 00:26:23.109 "product_name": "Malloc disk", 00:26:23.109 "block_size": 512, 00:26:23.109 "num_blocks": 65536, 00:26:23.109 "uuid": "ff7eae0d-428f-11ef-a0af-c98d8ee52a94", 00:26:23.109 "assigned_rate_limits": { 00:26:23.109 "rw_ios_per_sec": 0, 00:26:23.109 "rw_mbytes_per_sec": 0, 00:26:23.109 "r_mbytes_per_sec": 0, 00:26:23.109 "w_mbytes_per_sec": 0 00:26:23.109 }, 00:26:23.109 "claimed": false, 00:26:23.109 "zoned": false, 00:26:23.109 "supported_io_types": { 00:26:23.109 "read": true, 00:26:23.109 "write": true, 00:26:23.109 "unmap": true, 00:26:23.109 "flush": true, 00:26:23.109 "reset": true, 00:26:23.109 "nvme_admin": false, 00:26:23.109 "nvme_io": false, 00:26:23.109 "nvme_io_md": false, 00:26:23.109 "write_zeroes": true, 00:26:23.109 "zcopy": true, 00:26:23.109 "get_zone_info": false, 00:26:23.109 "zone_management": false, 00:26:23.109 "zone_append": false, 00:26:23.109 "compare": false, 00:26:23.109 "compare_and_write": false, 00:26:23.109 "abort": true, 
00:26:23.109 "seek_hole": false, 00:26:23.109 "seek_data": false, 00:26:23.109 "copy": true, 00:26:23.109 "nvme_iov_md": false 00:26:23.109 }, 00:26:23.109 "memory_domains": [ 00:26:23.109 { 00:26:23.109 "dma_device_id": "system", 00:26:23.109 "dma_device_type": 1 00:26:23.109 }, 00:26:23.109 { 00:26:23.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:23.109 "dma_device_type": 2 00:26:23.109 } 00:26:23.109 ], 00:26:23.109 "driver_specific": {} 00:26:23.109 } 00:26:23.109 ] 00:26:23.109 09:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:26:23.109 09:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:26:23.109 09:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:23.109 09:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:23.109 [2024-07-15 09:52:51.186357] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:23.109 [2024-07-15 09:52:51.186421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:23.109 [2024-07-15 09:52:51.186429] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:23.109 [2024-07-15 09:52:51.187092] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:23.109 [2024-07-15 09:52:51.187112] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:23.109 09:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:23.109 09:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:23.109 09:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:23.109 09:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:23.109 09:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:23.109 09:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:23.109 09:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:23.109 09:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:23.109 09:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:23.109 09:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:23.109 09:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:23.109 09:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:23.677 09:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:23.677 "name": "Existed_Raid", 00:26:23.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:23.677 "strip_size_kb": 0, 00:26:23.677 "state": "configuring", 00:26:23.677 "raid_level": "raid1", 00:26:23.677 "superblock": false, 00:26:23.677 "num_base_bdevs": 4, 00:26:23.677 
"num_base_bdevs_discovered": 3, 00:26:23.677 "num_base_bdevs_operational": 4, 00:26:23.677 "base_bdevs_list": [ 00:26:23.677 { 00:26:23.677 "name": "BaseBdev1", 00:26:23.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:23.677 "is_configured": false, 00:26:23.677 "data_offset": 0, 00:26:23.677 "data_size": 0 00:26:23.677 }, 00:26:23.677 { 00:26:23.677 "name": "BaseBdev2", 00:26:23.677 "uuid": "feb84d33-428f-11ef-a0af-c98d8ee52a94", 00:26:23.677 "is_configured": true, 00:26:23.677 "data_offset": 0, 00:26:23.677 "data_size": 65536 00:26:23.677 }, 00:26:23.677 { 00:26:23.677 "name": "BaseBdev3", 00:26:23.677 "uuid": "ff195a22-428f-11ef-a0af-c98d8ee52a94", 00:26:23.677 "is_configured": true, 00:26:23.677 "data_offset": 0, 00:26:23.677 "data_size": 65536 00:26:23.677 }, 00:26:23.677 { 00:26:23.677 "name": "BaseBdev4", 00:26:23.677 "uuid": "ff7eae0d-428f-11ef-a0af-c98d8ee52a94", 00:26:23.677 "is_configured": true, 00:26:23.677 "data_offset": 0, 00:26:23.677 "data_size": 65536 00:26:23.677 } 00:26:23.677 ] 00:26:23.677 }' 00:26:23.677 09:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:23.677 09:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:23.937 09:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:26:23.937 [2024-07-15 09:52:51.970381] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:23.937 09:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:23.937 09:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:23.937 09:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:23.937 09:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:23.937 09:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:23.937 09:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:23.937 09:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:23.937 09:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:23.937 09:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:23.937 09:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:23.937 09:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:23.937 09:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:24.196 09:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:24.196 "name": "Existed_Raid", 00:26:24.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:24.196 "strip_size_kb": 0, 00:26:24.196 "state": "configuring", 00:26:24.196 "raid_level": "raid1", 00:26:24.196 "superblock": false, 00:26:24.196 "num_base_bdevs": 4, 00:26:24.196 "num_base_bdevs_discovered": 2, 00:26:24.196 "num_base_bdevs_operational": 4, 00:26:24.196 "base_bdevs_list": [ 00:26:24.196 { 00:26:24.196 "name": 
"BaseBdev1", 00:26:24.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:24.196 "is_configured": false, 00:26:24.196 "data_offset": 0, 00:26:24.196 "data_size": 0 00:26:24.197 }, 00:26:24.197 { 00:26:24.197 "name": null, 00:26:24.197 "uuid": "feb84d33-428f-11ef-a0af-c98d8ee52a94", 00:26:24.197 "is_configured": false, 00:26:24.197 "data_offset": 0, 00:26:24.197 "data_size": 65536 00:26:24.197 }, 00:26:24.197 { 00:26:24.197 "name": "BaseBdev3", 00:26:24.197 "uuid": "ff195a22-428f-11ef-a0af-c98d8ee52a94", 00:26:24.197 "is_configured": true, 00:26:24.197 "data_offset": 0, 00:26:24.197 "data_size": 65536 00:26:24.197 }, 00:26:24.197 { 00:26:24.197 "name": "BaseBdev4", 00:26:24.197 "uuid": "ff7eae0d-428f-11ef-a0af-c98d8ee52a94", 00:26:24.197 "is_configured": true, 00:26:24.197 "data_offset": 0, 00:26:24.197 "data_size": 65536 00:26:24.197 } 00:26:24.197 ] 00:26:24.197 }' 00:26:24.197 09:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:24.197 09:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:24.456 09:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:24.456 09:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:24.714 09:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:26:24.714 09:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:24.974 [2024-07-15 09:52:52.850553] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:24.974 BaseBdev1 00:26:24.974 09:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:26:24.974 09:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:26:24.974 09:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:24.974 09:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:26:24.974 09:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:24.974 09:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:24.974 09:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:24.974 09:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:25.233 [ 00:26:25.233 { 00:26:25.233 "name": "BaseBdev1", 00:26:25.233 "aliases": [ 00:26:25.233 "00d5b639-4290-11ef-a0af-c98d8ee52a94" 00:26:25.233 ], 00:26:25.233 "product_name": "Malloc disk", 00:26:25.233 "block_size": 512, 00:26:25.233 "num_blocks": 65536, 00:26:25.233 "uuid": "00d5b639-4290-11ef-a0af-c98d8ee52a94", 00:26:25.233 "assigned_rate_limits": { 00:26:25.233 "rw_ios_per_sec": 0, 00:26:25.233 "rw_mbytes_per_sec": 0, 00:26:25.233 "r_mbytes_per_sec": 0, 00:26:25.233 "w_mbytes_per_sec": 0 00:26:25.233 }, 00:26:25.233 "claimed": true, 00:26:25.233 "claim_type": "exclusive_write", 00:26:25.233 "zoned": false, 
00:26:25.233 "supported_io_types": { 00:26:25.233 "read": true, 00:26:25.233 "write": true, 00:26:25.233 "unmap": true, 00:26:25.233 "flush": true, 00:26:25.233 "reset": true, 00:26:25.233 "nvme_admin": false, 00:26:25.233 "nvme_io": false, 00:26:25.233 "nvme_io_md": false, 00:26:25.233 "write_zeroes": true, 00:26:25.233 "zcopy": true, 00:26:25.233 "get_zone_info": false, 00:26:25.233 "zone_management": false, 00:26:25.233 "zone_append": false, 00:26:25.233 "compare": false, 00:26:25.233 "compare_and_write": false, 00:26:25.233 "abort": true, 00:26:25.233 "seek_hole": false, 00:26:25.233 "seek_data": false, 00:26:25.233 "copy": true, 00:26:25.233 "nvme_iov_md": false 00:26:25.233 }, 00:26:25.233 "memory_domains": [ 00:26:25.233 { 00:26:25.233 "dma_device_id": "system", 00:26:25.233 "dma_device_type": 1 00:26:25.233 }, 00:26:25.233 { 00:26:25.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:25.233 "dma_device_type": 2 00:26:25.233 } 00:26:25.233 ], 00:26:25.233 "driver_specific": {} 00:26:25.233 } 00:26:25.233 ] 00:26:25.233 09:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:26:25.233 09:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:25.233 09:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:25.233 09:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:25.233 09:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:25.233 09:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:25.233 09:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:25.233 09:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:25.233 09:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:25.233 09:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:25.233 09:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:25.233 09:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:25.233 09:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:25.493 09:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:25.493 "name": "Existed_Raid", 00:26:25.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:25.493 "strip_size_kb": 0, 00:26:25.493 "state": "configuring", 00:26:25.493 "raid_level": "raid1", 00:26:25.493 "superblock": false, 00:26:25.493 "num_base_bdevs": 4, 00:26:25.493 "num_base_bdevs_discovered": 3, 00:26:25.493 "num_base_bdevs_operational": 4, 00:26:25.493 "base_bdevs_list": [ 00:26:25.493 { 00:26:25.493 "name": "BaseBdev1", 00:26:25.493 "uuid": "00d5b639-4290-11ef-a0af-c98d8ee52a94", 00:26:25.493 "is_configured": true, 00:26:25.493 "data_offset": 0, 00:26:25.493 "data_size": 65536 00:26:25.493 }, 00:26:25.493 { 00:26:25.493 "name": null, 00:26:25.493 "uuid": "feb84d33-428f-11ef-a0af-c98d8ee52a94", 00:26:25.493 "is_configured": false, 00:26:25.493 "data_offset": 0, 00:26:25.493 "data_size": 65536 00:26:25.493 }, 
00:26:25.493 { 00:26:25.493 "name": "BaseBdev3", 00:26:25.493 "uuid": "ff195a22-428f-11ef-a0af-c98d8ee52a94", 00:26:25.493 "is_configured": true, 00:26:25.493 "data_offset": 0, 00:26:25.493 "data_size": 65536 00:26:25.493 }, 00:26:25.493 { 00:26:25.493 "name": "BaseBdev4", 00:26:25.493 "uuid": "ff7eae0d-428f-11ef-a0af-c98d8ee52a94", 00:26:25.493 "is_configured": true, 00:26:25.493 "data_offset": 0, 00:26:25.493 "data_size": 65536 00:26:25.493 } 00:26:25.493 ] 00:26:25.493 }' 00:26:25.493 09:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:25.493 09:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.751 09:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:25.751 09:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:26.010 09:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:26:26.010 09:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:26:26.010 [2024-07-15 09:52:54.090491] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:26.010 09:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:26.010 09:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:26.010 09:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:26.010 09:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:26.010 09:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:26.010 09:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:26.010 09:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:26.010 09:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:26.010 09:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:26.010 09:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:26.010 09:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:26.010 09:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:26.269 09:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:26.269 "name": "Existed_Raid", 00:26:26.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:26.269 "strip_size_kb": 0, 00:26:26.269 "state": "configuring", 00:26:26.269 "raid_level": "raid1", 00:26:26.269 "superblock": false, 00:26:26.269 "num_base_bdevs": 4, 00:26:26.269 "num_base_bdevs_discovered": 2, 00:26:26.269 "num_base_bdevs_operational": 4, 00:26:26.269 "base_bdevs_list": [ 00:26:26.269 { 00:26:26.269 "name": "BaseBdev1", 00:26:26.269 "uuid": "00d5b639-4290-11ef-a0af-c98d8ee52a94", 00:26:26.269 "is_configured": true, 00:26:26.269 "data_offset": 
0, 00:26:26.269 "data_size": 65536 00:26:26.269 }, 00:26:26.269 { 00:26:26.269 "name": null, 00:26:26.269 "uuid": "feb84d33-428f-11ef-a0af-c98d8ee52a94", 00:26:26.269 "is_configured": false, 00:26:26.269 "data_offset": 0, 00:26:26.269 "data_size": 65536 00:26:26.269 }, 00:26:26.269 { 00:26:26.269 "name": null, 00:26:26.269 "uuid": "ff195a22-428f-11ef-a0af-c98d8ee52a94", 00:26:26.269 "is_configured": false, 00:26:26.269 "data_offset": 0, 00:26:26.269 "data_size": 65536 00:26:26.269 }, 00:26:26.269 { 00:26:26.269 "name": "BaseBdev4", 00:26:26.269 "uuid": "ff7eae0d-428f-11ef-a0af-c98d8ee52a94", 00:26:26.269 "is_configured": true, 00:26:26.269 "data_offset": 0, 00:26:26.269 "data_size": 65536 00:26:26.269 } 00:26:26.269 ] 00:26:26.269 }' 00:26:26.269 09:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:26.269 09:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.528 09:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:26.528 09:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:26.787 09:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:26:26.787 09:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:26:27.046 [2024-07-15 09:52:54.966535] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:27.046 09:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:27.046 09:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:27.046 09:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:27.046 09:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:27.046 09:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:27.046 09:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:27.046 09:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:27.046 09:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:27.046 09:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:27.046 09:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:27.046 09:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:27.046 09:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:27.305 09:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:27.305 "name": "Existed_Raid", 00:26:27.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:27.305 "strip_size_kb": 0, 00:26:27.305 "state": "configuring", 00:26:27.305 "raid_level": "raid1", 00:26:27.305 "superblock": false, 00:26:27.305 "num_base_bdevs": 4, 
00:26:27.305 "num_base_bdevs_discovered": 3, 00:26:27.305 "num_base_bdevs_operational": 4, 00:26:27.305 "base_bdevs_list": [ 00:26:27.305 { 00:26:27.305 "name": "BaseBdev1", 00:26:27.305 "uuid": "00d5b639-4290-11ef-a0af-c98d8ee52a94", 00:26:27.305 "is_configured": true, 00:26:27.305 "data_offset": 0, 00:26:27.305 "data_size": 65536 00:26:27.305 }, 00:26:27.305 { 00:26:27.305 "name": null, 00:26:27.305 "uuid": "feb84d33-428f-11ef-a0af-c98d8ee52a94", 00:26:27.305 "is_configured": false, 00:26:27.305 "data_offset": 0, 00:26:27.305 "data_size": 65536 00:26:27.305 }, 00:26:27.305 { 00:26:27.305 "name": "BaseBdev3", 00:26:27.305 "uuid": "ff195a22-428f-11ef-a0af-c98d8ee52a94", 00:26:27.305 "is_configured": true, 00:26:27.305 "data_offset": 0, 00:26:27.305 "data_size": 65536 00:26:27.305 }, 00:26:27.305 { 00:26:27.305 "name": "BaseBdev4", 00:26:27.305 "uuid": "ff7eae0d-428f-11ef-a0af-c98d8ee52a94", 00:26:27.305 "is_configured": true, 00:26:27.305 "data_offset": 0, 00:26:27.305 "data_size": 65536 00:26:27.305 } 00:26:27.305 ] 00:26:27.305 }' 00:26:27.305 09:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:27.305 09:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:27.564 09:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:27.564 09:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:27.564 09:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:26:27.564 09:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:27.822 [2024-07-15 09:52:55.818596] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:27.822 09:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:27.822 09:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:27.822 09:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:27.822 09:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:27.822 09:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:27.822 09:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:27.822 09:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:27.822 09:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:27.822 09:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:27.822 09:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:27.822 09:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:27.822 09:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:28.080 09:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:26:28.080 "name": "Existed_Raid", 00:26:28.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:28.080 "strip_size_kb": 0, 00:26:28.080 "state": "configuring", 00:26:28.080 "raid_level": "raid1", 00:26:28.080 "superblock": false, 00:26:28.080 "num_base_bdevs": 4, 00:26:28.080 "num_base_bdevs_discovered": 2, 00:26:28.080 "num_base_bdevs_operational": 4, 00:26:28.080 "base_bdevs_list": [ 00:26:28.080 { 00:26:28.080 "name": null, 00:26:28.080 "uuid": "00d5b639-4290-11ef-a0af-c98d8ee52a94", 00:26:28.080 "is_configured": false, 00:26:28.080 "data_offset": 0, 00:26:28.080 "data_size": 65536 00:26:28.080 }, 00:26:28.080 { 00:26:28.080 "name": null, 00:26:28.080 "uuid": "feb84d33-428f-11ef-a0af-c98d8ee52a94", 00:26:28.080 "is_configured": false, 00:26:28.080 "data_offset": 0, 00:26:28.080 "data_size": 65536 00:26:28.080 }, 00:26:28.080 { 00:26:28.080 "name": "BaseBdev3", 00:26:28.080 "uuid": "ff195a22-428f-11ef-a0af-c98d8ee52a94", 00:26:28.080 "is_configured": true, 00:26:28.080 "data_offset": 0, 00:26:28.080 "data_size": 65536 00:26:28.080 }, 00:26:28.080 { 00:26:28.080 "name": "BaseBdev4", 00:26:28.080 "uuid": "ff7eae0d-428f-11ef-a0af-c98d8ee52a94", 00:26:28.080 "is_configured": true, 00:26:28.080 "data_offset": 0, 00:26:28.080 "data_size": 65536 00:26:28.080 } 00:26:28.080 ] 00:26:28.080 }' 00:26:28.080 09:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:28.080 09:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:28.338 09:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:28.339 09:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:28.598 09:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:26:28.598 09:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:26:28.856 [2024-07-15 09:52:56.738925] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:28.856 09:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:28.856 09:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:28.856 09:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:28.856 09:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:28.856 09:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:28.856 09:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:28.856 09:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:28.856 09:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:28.856 09:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:28.856 09:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:28.856 09:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:28.856 09:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:29.116 09:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:29.116 "name": "Existed_Raid", 00:26:29.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:29.116 "strip_size_kb": 0, 00:26:29.116 "state": "configuring", 00:26:29.116 "raid_level": "raid1", 00:26:29.116 "superblock": false, 00:26:29.116 "num_base_bdevs": 4, 00:26:29.116 "num_base_bdevs_discovered": 3, 00:26:29.116 "num_base_bdevs_operational": 4, 00:26:29.116 "base_bdevs_list": [ 00:26:29.116 { 00:26:29.116 "name": null, 00:26:29.116 "uuid": "00d5b639-4290-11ef-a0af-c98d8ee52a94", 00:26:29.116 "is_configured": false, 00:26:29.116 "data_offset": 0, 00:26:29.116 "data_size": 65536 00:26:29.116 }, 00:26:29.116 { 00:26:29.116 "name": "BaseBdev2", 00:26:29.116 "uuid": "feb84d33-428f-11ef-a0af-c98d8ee52a94", 00:26:29.116 "is_configured": true, 00:26:29.116 "data_offset": 0, 00:26:29.116 "data_size": 65536 00:26:29.116 }, 00:26:29.116 { 00:26:29.116 "name": "BaseBdev3", 00:26:29.116 "uuid": "ff195a22-428f-11ef-a0af-c98d8ee52a94", 00:26:29.116 "is_configured": true, 00:26:29.116 "data_offset": 0, 00:26:29.116 "data_size": 65536 00:26:29.116 }, 00:26:29.116 { 00:26:29.116 "name": "BaseBdev4", 00:26:29.116 "uuid": "ff7eae0d-428f-11ef-a0af-c98d8ee52a94", 00:26:29.116 "is_configured": true, 00:26:29.116 "data_offset": 0, 00:26:29.116 "data_size": 65536 00:26:29.116 } 00:26:29.116 ] 00:26:29.116 }' 00:26:29.116 09:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:29.116 09:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:29.376 09:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:29.376 09:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:29.635 09:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:26:29.635 09:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:26:29.635 09:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:29.635 09:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 00d5b639-4290-11ef-a0af-c98d8ee52a94 00:26:29.895 [2024-07-15 09:52:57.919097] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:26:29.895 [2024-07-15 09:52:57.919129] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x30e003c34f00 00:26:29.895 [2024-07-15 09:52:57.919133] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:26:29.895 [2024-07-15 09:52:57.919154] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x30e003c97e20 00:26:29.895 [2024-07-15 09:52:57.919228] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x30e003c34f00 00:26:29.895 [2024-07-15 09:52:57.919231] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name Existed_Raid, raid_bdev 0x30e003c34f00 00:26:29.895 [2024-07-15 09:52:57.919261] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:29.895 NewBaseBdev 00:26:29.895 09:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:26:29.895 09:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:26:29.895 09:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:29.895 09:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:26:29.895 09:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:29.895 09:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:29.895 09:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:30.154 09:52:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:26:30.414 [ 00:26:30.414 { 00:26:30.414 "name": "NewBaseBdev", 00:26:30.414 "aliases": [ 00:26:30.414 "00d5b639-4290-11ef-a0af-c98d8ee52a94" 00:26:30.414 ], 00:26:30.414 "product_name": "Malloc disk", 00:26:30.414 "block_size": 512, 00:26:30.414 "num_blocks": 65536, 00:26:30.414 "uuid": "00d5b639-4290-11ef-a0af-c98d8ee52a94", 00:26:30.414 "assigned_rate_limits": { 00:26:30.414 "rw_ios_per_sec": 0, 00:26:30.414 "rw_mbytes_per_sec": 0, 00:26:30.414 "r_mbytes_per_sec": 0, 00:26:30.414 "w_mbytes_per_sec": 0 00:26:30.414 }, 00:26:30.414 "claimed": true, 00:26:30.414 "claim_type": "exclusive_write", 00:26:30.414 "zoned": false, 00:26:30.414 "supported_io_types": { 00:26:30.414 "read": true, 00:26:30.414 "write": true, 00:26:30.414 "unmap": true, 00:26:30.414 "flush": true, 00:26:30.414 "reset": true, 00:26:30.414 "nvme_admin": false, 00:26:30.414 "nvme_io": false, 00:26:30.414 "nvme_io_md": false, 00:26:30.414 "write_zeroes": true, 00:26:30.414 "zcopy": true, 00:26:30.414 "get_zone_info": false, 00:26:30.414 "zone_management": false, 00:26:30.414 "zone_append": false, 00:26:30.414 "compare": false, 00:26:30.414 "compare_and_write": false, 00:26:30.414 "abort": true, 00:26:30.414 "seek_hole": false, 00:26:30.414 "seek_data": false, 00:26:30.414 "copy": true, 00:26:30.414 "nvme_iov_md": false 00:26:30.414 }, 00:26:30.414 "memory_domains": [ 00:26:30.414 { 00:26:30.414 "dma_device_id": "system", 00:26:30.414 "dma_device_type": 1 00:26:30.414 }, 00:26:30.414 { 00:26:30.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:30.414 "dma_device_type": 2 00:26:30.414 } 00:26:30.414 ], 00:26:30.414 "driver_specific": {} 00:26:30.414 } 00:26:30.414 ] 00:26:30.414 09:52:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:26:30.414 09:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:26:30.414 09:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:30.414 09:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:30.414 09:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:30.414 09:52:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:30.414 09:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:30.414 09:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:30.414 09:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:30.414 09:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:30.414 09:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:30.414 09:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:30.414 09:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:30.674 09:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:30.674 "name": "Existed_Raid", 00:26:30.674 "uuid": "03db225c-4290-11ef-a0af-c98d8ee52a94", 00:26:30.674 "strip_size_kb": 0, 00:26:30.674 "state": "online", 00:26:30.674 "raid_level": "raid1", 00:26:30.674 "superblock": false, 00:26:30.674 "num_base_bdevs": 4, 00:26:30.674 "num_base_bdevs_discovered": 4, 00:26:30.674 "num_base_bdevs_operational": 4, 00:26:30.674 "base_bdevs_list": [ 00:26:30.674 { 00:26:30.674 "name": "NewBaseBdev", 00:26:30.674 "uuid": "00d5b639-4290-11ef-a0af-c98d8ee52a94", 00:26:30.674 "is_configured": true, 00:26:30.674 "data_offset": 0, 00:26:30.674 "data_size": 65536 00:26:30.674 }, 00:26:30.674 { 00:26:30.674 "name": "BaseBdev2", 00:26:30.674 "uuid": "feb84d33-428f-11ef-a0af-c98d8ee52a94", 00:26:30.674 "is_configured": true, 00:26:30.674 "data_offset": 0, 00:26:30.674 "data_size": 65536 00:26:30.674 }, 00:26:30.674 { 00:26:30.674 "name": "BaseBdev3", 00:26:30.674 "uuid": "ff195a22-428f-11ef-a0af-c98d8ee52a94", 00:26:30.674 "is_configured": true, 00:26:30.674 "data_offset": 0, 00:26:30.674 "data_size": 65536 00:26:30.674 }, 00:26:30.674 { 00:26:30.674 "name": "BaseBdev4", 00:26:30.674 "uuid": "ff7eae0d-428f-11ef-a0af-c98d8ee52a94", 00:26:30.674 "is_configured": true, 00:26:30.674 "data_offset": 0, 00:26:30.674 "data_size": 65536 00:26:30.674 } 00:26:30.674 ] 00:26:30.674 }' 00:26:30.674 09:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:30.674 09:52:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.933 09:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:26:30.933 09:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:26:30.933 09:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:30.933 09:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:30.933 09:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:30.933 09:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:26:30.933 09:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:30.933 09:52:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:31.192 
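[Editor's reference sketch] The property verification dumped below drives everything through the same rpc.py interface used throughout this test. A minimal standalone sketch of the pattern — assuming a bdev_svc target is already serving RPCs on /var/tmp/spdk-raid.sock; the assertions mirror what the bdev_raid.sh@200-208 checks do, they are not the script itself:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Fetch the assembled raid volume's description as one JSON object.
    info=$($rpc bdev_get_bdevs -b Existed_Raid | jq '.[]')

    # The raid1 volume must report itself online with all four base bdevs present.
    [[ $(jq -r '.driver_specific.raid.state' <<< "$info") == online ]]
    [[ $(jq -r '.driver_specific.raid.num_base_bdevs_discovered' <<< "$info") -eq 4 ]]
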
[2024-07-15 09:52:59.051057] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:31.192 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:31.192 "name": "Existed_Raid", 00:26:31.192 "aliases": [ 00:26:31.192 "03db225c-4290-11ef-a0af-c98d8ee52a94" 00:26:31.192 ], 00:26:31.192 "product_name": "Raid Volume", 00:26:31.192 "block_size": 512, 00:26:31.192 "num_blocks": 65536, 00:26:31.192 "uuid": "03db225c-4290-11ef-a0af-c98d8ee52a94", 00:26:31.192 "assigned_rate_limits": { 00:26:31.192 "rw_ios_per_sec": 0, 00:26:31.192 "rw_mbytes_per_sec": 0, 00:26:31.192 "r_mbytes_per_sec": 0, 00:26:31.192 "w_mbytes_per_sec": 0 00:26:31.192 }, 00:26:31.192 "claimed": false, 00:26:31.192 "zoned": false, 00:26:31.193 "supported_io_types": { 00:26:31.193 "read": true, 00:26:31.193 "write": true, 00:26:31.193 "unmap": false, 00:26:31.193 "flush": false, 00:26:31.193 "reset": true, 00:26:31.193 "nvme_admin": false, 00:26:31.193 "nvme_io": false, 00:26:31.193 "nvme_io_md": false, 00:26:31.193 "write_zeroes": true, 00:26:31.193 "zcopy": false, 00:26:31.193 "get_zone_info": false, 00:26:31.193 "zone_management": false, 00:26:31.193 "zone_append": false, 00:26:31.193 "compare": false, 00:26:31.193 "compare_and_write": false, 00:26:31.193 "abort": false, 00:26:31.193 "seek_hole": false, 00:26:31.193 "seek_data": false, 00:26:31.193 "copy": false, 00:26:31.193 "nvme_iov_md": false 00:26:31.193 }, 00:26:31.193 "memory_domains": [ 00:26:31.193 { 00:26:31.193 "dma_device_id": "system", 00:26:31.193 "dma_device_type": 1 00:26:31.193 }, 00:26:31.193 { 00:26:31.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:31.193 "dma_device_type": 2 00:26:31.193 }, 00:26:31.193 { 00:26:31.193 "dma_device_id": "system", 00:26:31.193 "dma_device_type": 1 00:26:31.193 }, 00:26:31.193 { 00:26:31.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:31.193 "dma_device_type": 2 00:26:31.193 }, 00:26:31.193 { 00:26:31.193 "dma_device_id": "system", 00:26:31.193 "dma_device_type": 1 00:26:31.193 }, 00:26:31.193 { 00:26:31.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:31.193 "dma_device_type": 2 00:26:31.193 }, 00:26:31.193 { 00:26:31.193 "dma_device_id": "system", 00:26:31.193 "dma_device_type": 1 00:26:31.193 }, 00:26:31.193 { 00:26:31.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:31.193 "dma_device_type": 2 00:26:31.193 } 00:26:31.193 ], 00:26:31.193 "driver_specific": { 00:26:31.193 "raid": { 00:26:31.193 "uuid": "03db225c-4290-11ef-a0af-c98d8ee52a94", 00:26:31.193 "strip_size_kb": 0, 00:26:31.193 "state": "online", 00:26:31.193 "raid_level": "raid1", 00:26:31.193 "superblock": false, 00:26:31.193 "num_base_bdevs": 4, 00:26:31.193 "num_base_bdevs_discovered": 4, 00:26:31.193 "num_base_bdevs_operational": 4, 00:26:31.193 "base_bdevs_list": [ 00:26:31.193 { 00:26:31.193 "name": "NewBaseBdev", 00:26:31.193 "uuid": "00d5b639-4290-11ef-a0af-c98d8ee52a94", 00:26:31.193 "is_configured": true, 00:26:31.193 "data_offset": 0, 00:26:31.193 "data_size": 65536 00:26:31.193 }, 00:26:31.193 { 00:26:31.193 "name": "BaseBdev2", 00:26:31.193 "uuid": "feb84d33-428f-11ef-a0af-c98d8ee52a94", 00:26:31.193 "is_configured": true, 00:26:31.193 "data_offset": 0, 00:26:31.193 "data_size": 65536 00:26:31.193 }, 00:26:31.193 { 00:26:31.193 "name": "BaseBdev3", 00:26:31.193 "uuid": "ff195a22-428f-11ef-a0af-c98d8ee52a94", 00:26:31.193 "is_configured": true, 00:26:31.193 "data_offset": 0, 00:26:31.193 "data_size": 65536 00:26:31.193 }, 00:26:31.193 { 00:26:31.193 "name": "BaseBdev4", 
00:26:31.193 "uuid": "ff7eae0d-428f-11ef-a0af-c98d8ee52a94", 00:26:31.193 "is_configured": true, 00:26:31.193 "data_offset": 0, 00:26:31.193 "data_size": 65536 00:26:31.193 } 00:26:31.193 ] 00:26:31.193 } 00:26:31.193 } 00:26:31.193 }' 00:26:31.193 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:31.193 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:26:31.193 BaseBdev2 00:26:31.193 BaseBdev3 00:26:31.193 BaseBdev4' 00:26:31.193 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:31.193 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:26:31.193 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:31.193 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:31.193 "name": "NewBaseBdev", 00:26:31.193 "aliases": [ 00:26:31.193 "00d5b639-4290-11ef-a0af-c98d8ee52a94" 00:26:31.193 ], 00:26:31.193 "product_name": "Malloc disk", 00:26:31.193 "block_size": 512, 00:26:31.193 "num_blocks": 65536, 00:26:31.193 "uuid": "00d5b639-4290-11ef-a0af-c98d8ee52a94", 00:26:31.193 "assigned_rate_limits": { 00:26:31.193 "rw_ios_per_sec": 0, 00:26:31.193 "rw_mbytes_per_sec": 0, 00:26:31.193 "r_mbytes_per_sec": 0, 00:26:31.193 "w_mbytes_per_sec": 0 00:26:31.193 }, 00:26:31.193 "claimed": true, 00:26:31.193 "claim_type": "exclusive_write", 00:26:31.193 "zoned": false, 00:26:31.193 "supported_io_types": { 00:26:31.193 "read": true, 00:26:31.193 "write": true, 00:26:31.193 "unmap": true, 00:26:31.193 "flush": true, 00:26:31.193 "reset": true, 00:26:31.193 "nvme_admin": false, 00:26:31.193 "nvme_io": false, 00:26:31.193 "nvme_io_md": false, 00:26:31.193 "write_zeroes": true, 00:26:31.193 "zcopy": true, 00:26:31.193 "get_zone_info": false, 00:26:31.193 "zone_management": false, 00:26:31.193 "zone_append": false, 00:26:31.193 "compare": false, 00:26:31.193 "compare_and_write": false, 00:26:31.193 "abort": true, 00:26:31.193 "seek_hole": false, 00:26:31.193 "seek_data": false, 00:26:31.193 "copy": true, 00:26:31.193 "nvme_iov_md": false 00:26:31.193 }, 00:26:31.193 "memory_domains": [ 00:26:31.193 { 00:26:31.193 "dma_device_id": "system", 00:26:31.193 "dma_device_type": 1 00:26:31.193 }, 00:26:31.193 { 00:26:31.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:31.193 "dma_device_type": 2 00:26:31.193 } 00:26:31.193 ], 00:26:31.193 "driver_specific": {} 00:26:31.193 }' 00:26:31.193 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:31.193 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:31.452 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:31.452 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:31.452 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:31.452 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:31.452 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:31.452 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:31.452 
09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:31.452 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:31.452 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:31.452 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:31.452 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:31.452 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:31.452 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:31.711 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:31.711 "name": "BaseBdev2", 00:26:31.711 "aliases": [ 00:26:31.711 "feb84d33-428f-11ef-a0af-c98d8ee52a94" 00:26:31.711 ], 00:26:31.711 "product_name": "Malloc disk", 00:26:31.711 "block_size": 512, 00:26:31.711 "num_blocks": 65536, 00:26:31.711 "uuid": "feb84d33-428f-11ef-a0af-c98d8ee52a94", 00:26:31.711 "assigned_rate_limits": { 00:26:31.711 "rw_ios_per_sec": 0, 00:26:31.711 "rw_mbytes_per_sec": 0, 00:26:31.711 "r_mbytes_per_sec": 0, 00:26:31.711 "w_mbytes_per_sec": 0 00:26:31.711 }, 00:26:31.711 "claimed": true, 00:26:31.711 "claim_type": "exclusive_write", 00:26:31.711 "zoned": false, 00:26:31.711 "supported_io_types": { 00:26:31.711 "read": true, 00:26:31.711 "write": true, 00:26:31.711 "unmap": true, 00:26:31.711 "flush": true, 00:26:31.711 "reset": true, 00:26:31.711 "nvme_admin": false, 00:26:31.711 "nvme_io": false, 00:26:31.711 "nvme_io_md": false, 00:26:31.711 "write_zeroes": true, 00:26:31.711 "zcopy": true, 00:26:31.711 "get_zone_info": false, 00:26:31.711 "zone_management": false, 00:26:31.711 "zone_append": false, 00:26:31.711 "compare": false, 00:26:31.711 "compare_and_write": false, 00:26:31.711 "abort": true, 00:26:31.711 "seek_hole": false, 00:26:31.711 "seek_data": false, 00:26:31.711 "copy": true, 00:26:31.711 "nvme_iov_md": false 00:26:31.711 }, 00:26:31.711 "memory_domains": [ 00:26:31.711 { 00:26:31.711 "dma_device_id": "system", 00:26:31.711 "dma_device_type": 1 00:26:31.711 }, 00:26:31.711 { 00:26:31.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:31.711 "dma_device_type": 2 00:26:31.711 } 00:26:31.711 ], 00:26:31.711 "driver_specific": {} 00:26:31.711 }' 00:26:31.711 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:31.711 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:31.711 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:31.711 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:31.711 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:31.711 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:31.711 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:31.711 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:31.711 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:31.711 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
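[Editor's reference sketch] The same per-bdev geometry checks (block_size, md_size, md_interleave, dif_type, as in the jq calls above) repeat for every member of the volume. The loop reduces to the following shape, sketched under the same socket assumption; all four malloc bdevs were created as "32 512" (32 MiB, 512-byte blocks), hence block_size 512 and 65536 blocks:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Every configured base bdev must match the raid volume's geometry:
    # 512-byte blocks, no metadata, no interleave, no DIF.
    for name in NewBaseBdev BaseBdev2 BaseBdev3 BaseBdev4; do
        info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
        [[ $(jq .block_size    <<< "$info") == 512  ]]
        [[ $(jq .md_size       <<< "$info") == null ]]
        [[ $(jq .md_interleave <<< "$info") == null ]]
        [[ $(jq .dif_type      <<< "$info") == null ]]
    done
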
00:26:31.711 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:31.711 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:31.712 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:31.712 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:26:31.712 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:32.022 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:32.022 "name": "BaseBdev3", 00:26:32.022 "aliases": [ 00:26:32.022 "ff195a22-428f-11ef-a0af-c98d8ee52a94" 00:26:32.022 ], 00:26:32.022 "product_name": "Malloc disk", 00:26:32.022 "block_size": 512, 00:26:32.022 "num_blocks": 65536, 00:26:32.022 "uuid": "ff195a22-428f-11ef-a0af-c98d8ee52a94", 00:26:32.022 "assigned_rate_limits": { 00:26:32.022 "rw_ios_per_sec": 0, 00:26:32.022 "rw_mbytes_per_sec": 0, 00:26:32.022 "r_mbytes_per_sec": 0, 00:26:32.022 "w_mbytes_per_sec": 0 00:26:32.022 }, 00:26:32.022 "claimed": true, 00:26:32.022 "claim_type": "exclusive_write", 00:26:32.022 "zoned": false, 00:26:32.022 "supported_io_types": { 00:26:32.022 "read": true, 00:26:32.022 "write": true, 00:26:32.022 "unmap": true, 00:26:32.022 "flush": true, 00:26:32.022 "reset": true, 00:26:32.022 "nvme_admin": false, 00:26:32.022 "nvme_io": false, 00:26:32.022 "nvme_io_md": false, 00:26:32.022 "write_zeroes": true, 00:26:32.022 "zcopy": true, 00:26:32.022 "get_zone_info": false, 00:26:32.022 "zone_management": false, 00:26:32.022 "zone_append": false, 00:26:32.022 "compare": false, 00:26:32.022 "compare_and_write": false, 00:26:32.022 "abort": true, 00:26:32.022 "seek_hole": false, 00:26:32.022 "seek_data": false, 00:26:32.022 "copy": true, 00:26:32.022 "nvme_iov_md": false 00:26:32.022 }, 00:26:32.022 "memory_domains": [ 00:26:32.022 { 00:26:32.022 "dma_device_id": "system", 00:26:32.022 "dma_device_type": 1 00:26:32.022 }, 00:26:32.022 { 00:26:32.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:32.022 "dma_device_type": 2 00:26:32.022 } 00:26:32.022 ], 00:26:32.022 "driver_specific": {} 00:26:32.022 }' 00:26:32.022 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:32.022 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:32.022 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:32.022 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:32.022 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:32.022 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:32.022 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:32.022 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:32.022 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:32.022 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:32.022 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:32.022 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == 
null ]] 00:26:32.022 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:32.022 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:26:32.022 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:32.303 09:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:32.303 "name": "BaseBdev4", 00:26:32.303 "aliases": [ 00:26:32.303 "ff7eae0d-428f-11ef-a0af-c98d8ee52a94" 00:26:32.303 ], 00:26:32.303 "product_name": "Malloc disk", 00:26:32.303 "block_size": 512, 00:26:32.303 "num_blocks": 65536, 00:26:32.303 "uuid": "ff7eae0d-428f-11ef-a0af-c98d8ee52a94", 00:26:32.303 "assigned_rate_limits": { 00:26:32.303 "rw_ios_per_sec": 0, 00:26:32.303 "rw_mbytes_per_sec": 0, 00:26:32.303 "r_mbytes_per_sec": 0, 00:26:32.303 "w_mbytes_per_sec": 0 00:26:32.303 }, 00:26:32.303 "claimed": true, 00:26:32.303 "claim_type": "exclusive_write", 00:26:32.303 "zoned": false, 00:26:32.303 "supported_io_types": { 00:26:32.303 "read": true, 00:26:32.303 "write": true, 00:26:32.303 "unmap": true, 00:26:32.303 "flush": true, 00:26:32.303 "reset": true, 00:26:32.303 "nvme_admin": false, 00:26:32.303 "nvme_io": false, 00:26:32.303 "nvme_io_md": false, 00:26:32.303 "write_zeroes": true, 00:26:32.303 "zcopy": true, 00:26:32.303 "get_zone_info": false, 00:26:32.303 "zone_management": false, 00:26:32.303 "zone_append": false, 00:26:32.303 "compare": false, 00:26:32.303 "compare_and_write": false, 00:26:32.303 "abort": true, 00:26:32.303 "seek_hole": false, 00:26:32.303 "seek_data": false, 00:26:32.303 "copy": true, 00:26:32.303 "nvme_iov_md": false 00:26:32.303 }, 00:26:32.303 "memory_domains": [ 00:26:32.303 { 00:26:32.303 "dma_device_id": "system", 00:26:32.303 "dma_device_type": 1 00:26:32.303 }, 00:26:32.303 { 00:26:32.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:32.303 "dma_device_type": 2 00:26:32.303 } 00:26:32.303 ], 00:26:32.303 "driver_specific": {} 00:26:32.303 }' 00:26:32.303 09:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:32.303 09:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:32.303 09:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:32.303 09:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:32.303 09:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:32.303 09:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:32.303 09:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:32.303 09:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:32.303 09:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:32.303 09:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:32.303 09:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:32.303 09:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:32.303 09:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 
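The bdev_raid_delete call just issued (bdev_raid.sh@338) tears the volume down; the debug entries that follow show the raid state machine going from online to offline before the bdev is freed, after which the test kills the bdev_svc daemon (bdev_raid.sh@341). A hedged sketch of that teardown, assuming $raid_pid holds the daemon's pid recorded at startup; note the real killprocess helper also verifies the process name first (via ps -c -o command on FreeBSD, as logged below) before signalling:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$rpc bdev_raid_delete Existed_Raid
# after deletion the raid bdev must no longer be reported
! $rpc bdev_raid_get_bdevs all | jq -r '.[].name' | grep -qx Existed_Raid
kill "$raid_pid"
wait "$raid_pid" 2>/dev/null || true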
00:26:32.562 [2024-07-15 09:53:00.439111] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:32.562 [2024-07-15 09:53:00.439141] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:32.562 [2024-07-15 09:53:00.439159] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:32.562 [2024-07-15 09:53:00.439262] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:32.562 [2024-07-15 09:53:00.439266] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x30e003c34f00 name Existed_Raid, state offline 00:26:32.562 09:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 62771 00:26:32.562 09:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 62771 ']' 00:26:32.562 09:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 62771 00:26:32.562 09:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:26:32.562 09:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:26:32.562 09:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 62771 00:26:32.562 09:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:26:32.562 09:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:26:32.562 09:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:26:32.562 killing process with pid 62771 00:26:32.562 09:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62771' 00:26:32.562 09:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 62771 00:26:32.562 [2024-07-15 09:53:00.470361] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:32.562 09:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 62771 00:26:32.562 [2024-07-15 09:53:00.504613] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:32.821 09:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:26:32.821 00:26:32.821 real 0m22.359s 00:26:32.821 user 0m39.472s 00:26:32.821 sys 0m4.444s 00:26:32.821 09:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:32.821 ************************************ 00:26:32.821 09:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.821 END TEST raid_state_function_test 00:26:32.821 ************************************ 00:26:32.821 09:53:00 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:26:32.821 09:53:00 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:26:32.821 09:53:00 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:26:32.821 09:53:00 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:32.821 09:53:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:32.821 ************************************ 00:26:32.821 START TEST raid_state_function_test_sb 00:26:32.821 ************************************ 00:26:32.821 09:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # 
raid_state_function_test raid1 4 true 00:26:32.821 09:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:26:32.821 09:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:26:32.821 09:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:26:32.821 09:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:26:32.821 09:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:26:32.821 09:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:32.821 09:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:26:32.821 09:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:32.821 09:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:32.821 09:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:26:32.821 09:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:32.821 09:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:32.821 09:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:26:32.821 09:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:32.821 09:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:32.821 09:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:26:32.821 09:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:32.821 09:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:32.821 09:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:32.821 09:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:26:32.821 09:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:26:32.821 09:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:26:32.821 09:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:26:32.821 09:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:26:32.821 09:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:26:32.821 09:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:26:32.821 09:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:26:32.821 09:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:26:32.821 09:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=63570 00:26:32.821 09:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 63570' 00:26:32.821 Process raid pid: 63570 00:26:32.821 09:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 63570 /var/tmp/spdk-raid.sock 
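At this point the _sb variant's parameters are set (raid1, four base bdevs, superblock enabled, strip_size=0), and the entries that follow launch the bdev_svc app and wait for its RPC socket. Condensed into a hedged sketch, the flow the next stretch of log records is: start the daemon, create a superblock raid1 volume whose base bdevs do not exist yet (so it sits in "configuring"), then add malloc base bdevs one at a time until it flips to "online". The command lines are as logged; the polling loop is a simplified stand-in for autotest_common.sh's waitforlisten, and the state helper function is illustrative.

svc=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

"$svc" -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
raid_pid=$!
until $rpc rpc_get_methods &>/dev/null; do sleep 0.1; done   # wait for the RPC server

# -s requests on-disk superblocks; raid1 takes no strip size, hence strip_size=0
$rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

state() { $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'; }
state    # "configuring": none of the base bdevs exist yet

for name in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
    # 32 MiB malloc disk with 512-byte blocks; the raid module claims it on arrival
    $rpc bdev_malloc_create 32 512 -b "$name"
done
state    # "online" once all four base bdevs are configured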
00:26:32.821 09:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:26:32.821 09:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 63570 ']' 00:26:32.821 09:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:32.821 09:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:32.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:32.821 09:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:32.821 09:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:32.821 09:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:32.821 [2024-07-15 09:53:00.829477] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:26:32.821 [2024-07-15 09:53:00.829789] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:26:33.753 EAL: TSC is not safe to use in SMP mode 00:26:33.753 EAL: TSC is not invariant 00:26:33.753 [2024-07-15 09:53:01.532531] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.753 [2024-07-15 09:53:01.647172] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:26:33.753 [2024-07-15 09:53:01.649652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:33.753 [2024-07-15 09:53:01.650370] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:33.753 [2024-07-15 09:53:01.650381] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:33.753 09:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:33.753 09:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:26:33.753 09:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:34.011 [2024-07-15 09:53:01.953501] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:34.011 [2024-07-15 09:53:01.953574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:34.011 [2024-07-15 09:53:01.953579] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:34.011 [2024-07-15 09:53:01.953587] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:34.011 [2024-07-15 09:53:01.953590] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:34.011 [2024-07-15 09:53:01.953596] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:34.011 [2024-07-15 09:53:01.953599] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:34.011 [2024-07-15 09:53:01.953605] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist 
now 00:26:34.011 09:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:34.011 09:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:34.011 09:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:34.011 09:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:34.011 09:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:34.011 09:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:34.011 09:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:34.011 09:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:34.011 09:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:34.011 09:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:34.011 09:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:34.011 09:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:34.269 09:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:34.269 "name": "Existed_Raid", 00:26:34.269 "uuid": "0642ba4d-4290-11ef-a0af-c98d8ee52a94", 00:26:34.269 "strip_size_kb": 0, 00:26:34.269 "state": "configuring", 00:26:34.269 "raid_level": "raid1", 00:26:34.269 "superblock": true, 00:26:34.269 "num_base_bdevs": 4, 00:26:34.269 "num_base_bdevs_discovered": 0, 00:26:34.269 "num_base_bdevs_operational": 4, 00:26:34.269 "base_bdevs_list": [ 00:26:34.269 { 00:26:34.269 "name": "BaseBdev1", 00:26:34.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:34.269 "is_configured": false, 00:26:34.269 "data_offset": 0, 00:26:34.270 "data_size": 0 00:26:34.270 }, 00:26:34.270 { 00:26:34.270 "name": "BaseBdev2", 00:26:34.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:34.270 "is_configured": false, 00:26:34.270 "data_offset": 0, 00:26:34.270 "data_size": 0 00:26:34.270 }, 00:26:34.270 { 00:26:34.270 "name": "BaseBdev3", 00:26:34.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:34.270 "is_configured": false, 00:26:34.270 "data_offset": 0, 00:26:34.270 "data_size": 0 00:26:34.270 }, 00:26:34.270 { 00:26:34.270 "name": "BaseBdev4", 00:26:34.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:34.270 "is_configured": false, 00:26:34.270 "data_offset": 0, 00:26:34.270 "data_size": 0 00:26:34.270 } 00:26:34.270 ] 00:26:34.270 }' 00:26:34.270 09:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:34.270 09:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:34.528 09:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:34.786 [2024-07-15 09:53:02.645517] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:34.786 [2024-07-15 09:53:02.645569] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x133049e34500 name Existed_Raid, state configuring 00:26:34.786 09:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:34.786 [2024-07-15 09:53:02.861534] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:34.786 [2024-07-15 09:53:02.861601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:34.787 [2024-07-15 09:53:02.861606] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:34.787 [2024-07-15 09:53:02.861614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:34.787 [2024-07-15 09:53:02.861617] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:34.787 [2024-07-15 09:53:02.861623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:34.787 [2024-07-15 09:53:02.861626] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:34.787 [2024-07-15 09:53:02.861632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:34.787 09:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:35.050 [2024-07-15 09:53:03.126674] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:35.050 BaseBdev1 00:26:35.050 09:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:26:35.050 09:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:26:35.050 09:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:35.050 09:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:26:35.050 09:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:35.050 09:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:35.050 09:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:35.620 09:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:35.620 [ 00:26:35.620 { 00:26:35.621 "name": "BaseBdev1", 00:26:35.621 "aliases": [ 00:26:35.621 "06f5909d-4290-11ef-a0af-c98d8ee52a94" 00:26:35.621 ], 00:26:35.621 "product_name": "Malloc disk", 00:26:35.621 "block_size": 512, 00:26:35.621 "num_blocks": 65536, 00:26:35.621 "uuid": "06f5909d-4290-11ef-a0af-c98d8ee52a94", 00:26:35.621 "assigned_rate_limits": { 00:26:35.621 "rw_ios_per_sec": 0, 00:26:35.621 "rw_mbytes_per_sec": 0, 00:26:35.621 "r_mbytes_per_sec": 0, 00:26:35.621 "w_mbytes_per_sec": 0 00:26:35.621 }, 00:26:35.621 "claimed": true, 00:26:35.621 "claim_type": "exclusive_write", 00:26:35.621 "zoned": false, 00:26:35.621 "supported_io_types": { 00:26:35.621 "read": true, 00:26:35.621 "write": true, 00:26:35.621 "unmap": true, 
00:26:35.621 "flush": true, 00:26:35.621 "reset": true, 00:26:35.621 "nvme_admin": false, 00:26:35.621 "nvme_io": false, 00:26:35.621 "nvme_io_md": false, 00:26:35.621 "write_zeroes": true, 00:26:35.621 "zcopy": true, 00:26:35.621 "get_zone_info": false, 00:26:35.621 "zone_management": false, 00:26:35.621 "zone_append": false, 00:26:35.621 "compare": false, 00:26:35.621 "compare_and_write": false, 00:26:35.621 "abort": true, 00:26:35.621 "seek_hole": false, 00:26:35.621 "seek_data": false, 00:26:35.621 "copy": true, 00:26:35.621 "nvme_iov_md": false 00:26:35.621 }, 00:26:35.621 "memory_domains": [ 00:26:35.621 { 00:26:35.621 "dma_device_id": "system", 00:26:35.621 "dma_device_type": 1 00:26:35.621 }, 00:26:35.621 { 00:26:35.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:35.621 "dma_device_type": 2 00:26:35.621 } 00:26:35.621 ], 00:26:35.621 "driver_specific": {} 00:26:35.621 } 00:26:35.621 ] 00:26:35.621 09:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:35.621 09:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:35.621 09:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:35.621 09:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:35.621 09:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:35.621 09:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:35.621 09:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:35.621 09:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:35.621 09:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:35.621 09:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:35.621 09:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:35.621 09:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:35.621 09:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:35.880 09:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:35.880 "name": "Existed_Raid", 00:26:35.880 "uuid": "06cd4852-4290-11ef-a0af-c98d8ee52a94", 00:26:35.880 "strip_size_kb": 0, 00:26:35.880 "state": "configuring", 00:26:35.880 "raid_level": "raid1", 00:26:35.880 "superblock": true, 00:26:35.880 "num_base_bdevs": 4, 00:26:35.880 "num_base_bdevs_discovered": 1, 00:26:35.880 "num_base_bdevs_operational": 4, 00:26:35.880 "base_bdevs_list": [ 00:26:35.880 { 00:26:35.880 "name": "BaseBdev1", 00:26:35.880 "uuid": "06f5909d-4290-11ef-a0af-c98d8ee52a94", 00:26:35.880 "is_configured": true, 00:26:35.880 "data_offset": 2048, 00:26:35.880 "data_size": 63488 00:26:35.880 }, 00:26:35.880 { 00:26:35.880 "name": "BaseBdev2", 00:26:35.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:35.880 "is_configured": false, 00:26:35.880 "data_offset": 0, 00:26:35.880 "data_size": 0 00:26:35.880 }, 00:26:35.880 { 00:26:35.880 "name": "BaseBdev3", 00:26:35.880 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:26:35.880 "is_configured": false, 00:26:35.880 "data_offset": 0, 00:26:35.880 "data_size": 0 00:26:35.880 }, 00:26:35.880 { 00:26:35.880 "name": "BaseBdev4", 00:26:35.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:35.880 "is_configured": false, 00:26:35.880 "data_offset": 0, 00:26:35.880 "data_size": 0 00:26:35.880 } 00:26:35.880 ] 00:26:35.880 }' 00:26:35.880 09:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:35.880 09:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:36.138 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:36.397 [2024-07-15 09:53:04.421571] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:36.397 [2024-07-15 09:53:04.421614] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x133049e34500 name Existed_Raid, state configuring 00:26:36.397 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:36.656 [2024-07-15 09:53:04.641599] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:36.656 [2024-07-15 09:53:04.642553] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:36.656 [2024-07-15 09:53:04.642600] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:36.656 [2024-07-15 09:53:04.642606] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:36.657 [2024-07-15 09:53:04.642613] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:36.657 [2024-07-15 09:53:04.642617] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:36.657 [2024-07-15 09:53:04.642623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:36.657 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:26:36.657 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:36.657 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:36.657 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:36.657 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:36.657 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:36.657 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:36.657 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:36.657 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:36.657 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:36.657 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
00:26:36.657 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:36.657 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:36.657 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:36.915 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:36.915 "name": "Existed_Raid", 00:26:36.915 "uuid": "07dce619-4290-11ef-a0af-c98d8ee52a94", 00:26:36.915 "strip_size_kb": 0, 00:26:36.915 "state": "configuring", 00:26:36.915 "raid_level": "raid1", 00:26:36.915 "superblock": true, 00:26:36.915 "num_base_bdevs": 4, 00:26:36.915 "num_base_bdevs_discovered": 1, 00:26:36.915 "num_base_bdevs_operational": 4, 00:26:36.915 "base_bdevs_list": [ 00:26:36.915 { 00:26:36.915 "name": "BaseBdev1", 00:26:36.915 "uuid": "06f5909d-4290-11ef-a0af-c98d8ee52a94", 00:26:36.915 "is_configured": true, 00:26:36.915 "data_offset": 2048, 00:26:36.915 "data_size": 63488 00:26:36.915 }, 00:26:36.915 { 00:26:36.915 "name": "BaseBdev2", 00:26:36.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:36.915 "is_configured": false, 00:26:36.915 "data_offset": 0, 00:26:36.915 "data_size": 0 00:26:36.915 }, 00:26:36.915 { 00:26:36.915 "name": "BaseBdev3", 00:26:36.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:36.915 "is_configured": false, 00:26:36.915 "data_offset": 0, 00:26:36.915 "data_size": 0 00:26:36.915 }, 00:26:36.915 { 00:26:36.915 "name": "BaseBdev4", 00:26:36.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:36.915 "is_configured": false, 00:26:36.915 "data_offset": 0, 00:26:36.916 "data_size": 0 00:26:36.916 } 00:26:36.916 ] 00:26:36.916 }' 00:26:36.916 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:36.916 09:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:37.174 09:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:37.433 [2024-07-15 09:53:05.465851] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:37.433 BaseBdev2 00:26:37.433 09:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:26:37.433 09:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:26:37.433 09:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:37.433 09:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:26:37.433 09:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:37.433 09:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:37.433 09:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:37.693 09:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:37.952 [ 00:26:37.952 { 00:26:37.952 "name": "BaseBdev2", 
00:26:37.952 "aliases": [ 00:26:37.952 "085aa54d-4290-11ef-a0af-c98d8ee52a94" 00:26:37.952 ], 00:26:37.952 "product_name": "Malloc disk", 00:26:37.952 "block_size": 512, 00:26:37.952 "num_blocks": 65536, 00:26:37.952 "uuid": "085aa54d-4290-11ef-a0af-c98d8ee52a94", 00:26:37.952 "assigned_rate_limits": { 00:26:37.952 "rw_ios_per_sec": 0, 00:26:37.952 "rw_mbytes_per_sec": 0, 00:26:37.952 "r_mbytes_per_sec": 0, 00:26:37.952 "w_mbytes_per_sec": 0 00:26:37.952 }, 00:26:37.952 "claimed": true, 00:26:37.952 "claim_type": "exclusive_write", 00:26:37.952 "zoned": false, 00:26:37.952 "supported_io_types": { 00:26:37.952 "read": true, 00:26:37.952 "write": true, 00:26:37.952 "unmap": true, 00:26:37.952 "flush": true, 00:26:37.952 "reset": true, 00:26:37.952 "nvme_admin": false, 00:26:37.952 "nvme_io": false, 00:26:37.952 "nvme_io_md": false, 00:26:37.952 "write_zeroes": true, 00:26:37.952 "zcopy": true, 00:26:37.952 "get_zone_info": false, 00:26:37.952 "zone_management": false, 00:26:37.952 "zone_append": false, 00:26:37.952 "compare": false, 00:26:37.952 "compare_and_write": false, 00:26:37.952 "abort": true, 00:26:37.952 "seek_hole": false, 00:26:37.952 "seek_data": false, 00:26:37.952 "copy": true, 00:26:37.952 "nvme_iov_md": false 00:26:37.952 }, 00:26:37.952 "memory_domains": [ 00:26:37.952 { 00:26:37.952 "dma_device_id": "system", 00:26:37.952 "dma_device_type": 1 00:26:37.952 }, 00:26:37.952 { 00:26:37.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:37.952 "dma_device_type": 2 00:26:37.952 } 00:26:37.952 ], 00:26:37.952 "driver_specific": {} 00:26:37.952 } 00:26:37.952 ] 00:26:37.952 09:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:37.952 09:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:37.952 09:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:37.952 09:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:37.952 09:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:37.952 09:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:37.952 09:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:37.952 09:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:37.952 09:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:37.952 09:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:37.952 09:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:37.952 09:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:37.952 09:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:37.952 09:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:37.952 09:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:38.212 09:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:38.212 "name": 
"Existed_Raid", 00:26:38.212 "uuid": "07dce619-4290-11ef-a0af-c98d8ee52a94", 00:26:38.212 "strip_size_kb": 0, 00:26:38.212 "state": "configuring", 00:26:38.212 "raid_level": "raid1", 00:26:38.212 "superblock": true, 00:26:38.212 "num_base_bdevs": 4, 00:26:38.212 "num_base_bdevs_discovered": 2, 00:26:38.212 "num_base_bdevs_operational": 4, 00:26:38.212 "base_bdevs_list": [ 00:26:38.212 { 00:26:38.212 "name": "BaseBdev1", 00:26:38.212 "uuid": "06f5909d-4290-11ef-a0af-c98d8ee52a94", 00:26:38.212 "is_configured": true, 00:26:38.212 "data_offset": 2048, 00:26:38.212 "data_size": 63488 00:26:38.212 }, 00:26:38.212 { 00:26:38.212 "name": "BaseBdev2", 00:26:38.212 "uuid": "085aa54d-4290-11ef-a0af-c98d8ee52a94", 00:26:38.212 "is_configured": true, 00:26:38.212 "data_offset": 2048, 00:26:38.212 "data_size": 63488 00:26:38.212 }, 00:26:38.212 { 00:26:38.212 "name": "BaseBdev3", 00:26:38.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:38.212 "is_configured": false, 00:26:38.212 "data_offset": 0, 00:26:38.212 "data_size": 0 00:26:38.212 }, 00:26:38.212 { 00:26:38.212 "name": "BaseBdev4", 00:26:38.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:38.212 "is_configured": false, 00:26:38.212 "data_offset": 0, 00:26:38.212 "data_size": 0 00:26:38.212 } 00:26:38.212 ] 00:26:38.212 }' 00:26:38.212 09:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:38.212 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:38.472 09:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:38.730 [2024-07-15 09:53:06.705914] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:38.730 BaseBdev3 00:26:38.730 09:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:26:38.730 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:26:38.730 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:38.730 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:26:38.730 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:38.730 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:38.730 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:38.988 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:39.247 [ 00:26:39.247 { 00:26:39.247 "name": "BaseBdev3", 00:26:39.247 "aliases": [ 00:26:39.247 "0917de08-4290-11ef-a0af-c98d8ee52a94" 00:26:39.247 ], 00:26:39.247 "product_name": "Malloc disk", 00:26:39.247 "block_size": 512, 00:26:39.247 "num_blocks": 65536, 00:26:39.247 "uuid": "0917de08-4290-11ef-a0af-c98d8ee52a94", 00:26:39.247 "assigned_rate_limits": { 00:26:39.247 "rw_ios_per_sec": 0, 00:26:39.247 "rw_mbytes_per_sec": 0, 00:26:39.247 "r_mbytes_per_sec": 0, 00:26:39.247 "w_mbytes_per_sec": 0 00:26:39.247 }, 00:26:39.247 "claimed": true, 00:26:39.247 "claim_type": "exclusive_write", 
00:26:39.247 "zoned": false, 00:26:39.247 "supported_io_types": { 00:26:39.247 "read": true, 00:26:39.247 "write": true, 00:26:39.247 "unmap": true, 00:26:39.247 "flush": true, 00:26:39.247 "reset": true, 00:26:39.247 "nvme_admin": false, 00:26:39.247 "nvme_io": false, 00:26:39.247 "nvme_io_md": false, 00:26:39.247 "write_zeroes": true, 00:26:39.247 "zcopy": true, 00:26:39.247 "get_zone_info": false, 00:26:39.247 "zone_management": false, 00:26:39.247 "zone_append": false, 00:26:39.247 "compare": false, 00:26:39.247 "compare_and_write": false, 00:26:39.247 "abort": true, 00:26:39.247 "seek_hole": false, 00:26:39.247 "seek_data": false, 00:26:39.247 "copy": true, 00:26:39.247 "nvme_iov_md": false 00:26:39.247 }, 00:26:39.247 "memory_domains": [ 00:26:39.247 { 00:26:39.247 "dma_device_id": "system", 00:26:39.247 "dma_device_type": 1 00:26:39.247 }, 00:26:39.247 { 00:26:39.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:39.247 "dma_device_type": 2 00:26:39.247 } 00:26:39.247 ], 00:26:39.247 "driver_specific": {} 00:26:39.247 } 00:26:39.247 ] 00:26:39.247 09:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:39.247 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:39.248 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:39.248 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:39.248 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:39.248 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:39.248 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:39.248 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:39.248 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:39.248 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:39.248 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:39.248 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:39.248 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:39.248 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:39.248 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:39.507 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:39.507 "name": "Existed_Raid", 00:26:39.507 "uuid": "07dce619-4290-11ef-a0af-c98d8ee52a94", 00:26:39.507 "strip_size_kb": 0, 00:26:39.507 "state": "configuring", 00:26:39.507 "raid_level": "raid1", 00:26:39.507 "superblock": true, 00:26:39.507 "num_base_bdevs": 4, 00:26:39.507 "num_base_bdevs_discovered": 3, 00:26:39.507 "num_base_bdevs_operational": 4, 00:26:39.507 "base_bdevs_list": [ 00:26:39.507 { 00:26:39.507 "name": "BaseBdev1", 00:26:39.507 "uuid": "06f5909d-4290-11ef-a0af-c98d8ee52a94", 00:26:39.507 "is_configured": true, 00:26:39.507 
"data_offset": 2048, 00:26:39.507 "data_size": 63488 00:26:39.507 }, 00:26:39.507 { 00:26:39.507 "name": "BaseBdev2", 00:26:39.507 "uuid": "085aa54d-4290-11ef-a0af-c98d8ee52a94", 00:26:39.507 "is_configured": true, 00:26:39.507 "data_offset": 2048, 00:26:39.507 "data_size": 63488 00:26:39.507 }, 00:26:39.507 { 00:26:39.507 "name": "BaseBdev3", 00:26:39.507 "uuid": "0917de08-4290-11ef-a0af-c98d8ee52a94", 00:26:39.507 "is_configured": true, 00:26:39.507 "data_offset": 2048, 00:26:39.507 "data_size": 63488 00:26:39.507 }, 00:26:39.507 { 00:26:39.507 "name": "BaseBdev4", 00:26:39.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:39.507 "is_configured": false, 00:26:39.507 "data_offset": 0, 00:26:39.507 "data_size": 0 00:26:39.507 } 00:26:39.507 ] 00:26:39.507 }' 00:26:39.507 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:39.507 09:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:39.767 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:26:40.027 [2024-07-15 09:53:07.885945] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:40.027 [2024-07-15 09:53:07.886023] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x133049e34a00 00:26:40.027 [2024-07-15 09:53:07.886028] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:40.027 [2024-07-15 09:53:07.886046] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x133049e97e20 00:26:40.027 [2024-07-15 09:53:07.886097] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x133049e34a00 00:26:40.027 [2024-07-15 09:53:07.886100] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x133049e34a00 00:26:40.027 [2024-07-15 09:53:07.886122] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:40.027 BaseBdev4 00:26:40.027 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:26:40.027 09:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:26:40.027 09:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:40.027 09:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:26:40.027 09:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:40.027 09:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:40.027 09:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:40.027 09:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:40.287 [ 00:26:40.287 { 00:26:40.287 "name": "BaseBdev4", 00:26:40.287 "aliases": [ 00:26:40.287 "09cbed5d-4290-11ef-a0af-c98d8ee52a94" 00:26:40.287 ], 00:26:40.287 "product_name": "Malloc disk", 00:26:40.287 "block_size": 512, 00:26:40.287 "num_blocks": 65536, 00:26:40.287 "uuid": "09cbed5d-4290-11ef-a0af-c98d8ee52a94", 00:26:40.287 
"assigned_rate_limits": { 00:26:40.287 "rw_ios_per_sec": 0, 00:26:40.287 "rw_mbytes_per_sec": 0, 00:26:40.287 "r_mbytes_per_sec": 0, 00:26:40.287 "w_mbytes_per_sec": 0 00:26:40.287 }, 00:26:40.287 "claimed": true, 00:26:40.287 "claim_type": "exclusive_write", 00:26:40.287 "zoned": false, 00:26:40.287 "supported_io_types": { 00:26:40.287 "read": true, 00:26:40.287 "write": true, 00:26:40.287 "unmap": true, 00:26:40.287 "flush": true, 00:26:40.287 "reset": true, 00:26:40.287 "nvme_admin": false, 00:26:40.287 "nvme_io": false, 00:26:40.287 "nvme_io_md": false, 00:26:40.287 "write_zeroes": true, 00:26:40.287 "zcopy": true, 00:26:40.287 "get_zone_info": false, 00:26:40.287 "zone_management": false, 00:26:40.287 "zone_append": false, 00:26:40.287 "compare": false, 00:26:40.287 "compare_and_write": false, 00:26:40.287 "abort": true, 00:26:40.287 "seek_hole": false, 00:26:40.287 "seek_data": false, 00:26:40.287 "copy": true, 00:26:40.287 "nvme_iov_md": false 00:26:40.287 }, 00:26:40.287 "memory_domains": [ 00:26:40.287 { 00:26:40.287 "dma_device_id": "system", 00:26:40.287 "dma_device_type": 1 00:26:40.287 }, 00:26:40.287 { 00:26:40.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:40.287 "dma_device_type": 2 00:26:40.287 } 00:26:40.287 ], 00:26:40.287 "driver_specific": {} 00:26:40.287 } 00:26:40.287 ] 00:26:40.287 09:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:40.287 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:40.287 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:40.287 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:26:40.287 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:40.287 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:40.287 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:40.287 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:40.287 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:40.287 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:40.287 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:40.287 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:40.287 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:40.287 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:40.287 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:40.546 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:40.546 "name": "Existed_Raid", 00:26:40.546 "uuid": "07dce619-4290-11ef-a0af-c98d8ee52a94", 00:26:40.546 "strip_size_kb": 0, 00:26:40.546 "state": "online", 00:26:40.546 "raid_level": "raid1", 00:26:40.546 "superblock": true, 00:26:40.546 "num_base_bdevs": 4, 00:26:40.546 "num_base_bdevs_discovered": 
4, 00:26:40.546 "num_base_bdevs_operational": 4, 00:26:40.546 "base_bdevs_list": [ 00:26:40.546 { 00:26:40.546 "name": "BaseBdev1", 00:26:40.546 "uuid": "06f5909d-4290-11ef-a0af-c98d8ee52a94", 00:26:40.546 "is_configured": true, 00:26:40.546 "data_offset": 2048, 00:26:40.546 "data_size": 63488 00:26:40.546 }, 00:26:40.546 { 00:26:40.546 "name": "BaseBdev2", 00:26:40.546 "uuid": "085aa54d-4290-11ef-a0af-c98d8ee52a94", 00:26:40.546 "is_configured": true, 00:26:40.546 "data_offset": 2048, 00:26:40.546 "data_size": 63488 00:26:40.546 }, 00:26:40.546 { 00:26:40.546 "name": "BaseBdev3", 00:26:40.546 "uuid": "0917de08-4290-11ef-a0af-c98d8ee52a94", 00:26:40.546 "is_configured": true, 00:26:40.546 "data_offset": 2048, 00:26:40.546 "data_size": 63488 00:26:40.546 }, 00:26:40.546 { 00:26:40.546 "name": "BaseBdev4", 00:26:40.546 "uuid": "09cbed5d-4290-11ef-a0af-c98d8ee52a94", 00:26:40.546 "is_configured": true, 00:26:40.546 "data_offset": 2048, 00:26:40.546 "data_size": 63488 00:26:40.546 } 00:26:40.546 ] 00:26:40.546 }' 00:26:40.546 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:40.546 09:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:40.805 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:26:40.805 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:26:40.805 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:40.805 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:40.805 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:40.805 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:26:40.805 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:40.805 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:41.064 [2024-07-15 09:53:09.057894] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:41.064 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:41.064 "name": "Existed_Raid", 00:26:41.064 "aliases": [ 00:26:41.064 "07dce619-4290-11ef-a0af-c98d8ee52a94" 00:26:41.064 ], 00:26:41.064 "product_name": "Raid Volume", 00:26:41.064 "block_size": 512, 00:26:41.064 "num_blocks": 63488, 00:26:41.064 "uuid": "07dce619-4290-11ef-a0af-c98d8ee52a94", 00:26:41.064 "assigned_rate_limits": { 00:26:41.064 "rw_ios_per_sec": 0, 00:26:41.064 "rw_mbytes_per_sec": 0, 00:26:41.064 "r_mbytes_per_sec": 0, 00:26:41.064 "w_mbytes_per_sec": 0 00:26:41.064 }, 00:26:41.064 "claimed": false, 00:26:41.064 "zoned": false, 00:26:41.064 "supported_io_types": { 00:26:41.064 "read": true, 00:26:41.064 "write": true, 00:26:41.064 "unmap": false, 00:26:41.064 "flush": false, 00:26:41.064 "reset": true, 00:26:41.064 "nvme_admin": false, 00:26:41.064 "nvme_io": false, 00:26:41.064 "nvme_io_md": false, 00:26:41.064 "write_zeroes": true, 00:26:41.064 "zcopy": false, 00:26:41.064 "get_zone_info": false, 00:26:41.064 "zone_management": false, 00:26:41.064 "zone_append": false, 00:26:41.064 "compare": false, 00:26:41.064 "compare_and_write": false, 00:26:41.064 "abort": 
false, 00:26:41.064 "seek_hole": false, 00:26:41.064 "seek_data": false, 00:26:41.064 "copy": false, 00:26:41.064 "nvme_iov_md": false 00:26:41.064 }, 00:26:41.064 "memory_domains": [ 00:26:41.064 { 00:26:41.064 "dma_device_id": "system", 00:26:41.064 "dma_device_type": 1 00:26:41.064 }, 00:26:41.064 { 00:26:41.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:41.064 "dma_device_type": 2 00:26:41.064 }, 00:26:41.064 { 00:26:41.064 "dma_device_id": "system", 00:26:41.064 "dma_device_type": 1 00:26:41.064 }, 00:26:41.064 { 00:26:41.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:41.064 "dma_device_type": 2 00:26:41.064 }, 00:26:41.064 { 00:26:41.064 "dma_device_id": "system", 00:26:41.064 "dma_device_type": 1 00:26:41.064 }, 00:26:41.064 { 00:26:41.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:41.064 "dma_device_type": 2 00:26:41.064 }, 00:26:41.064 { 00:26:41.064 "dma_device_id": "system", 00:26:41.064 "dma_device_type": 1 00:26:41.064 }, 00:26:41.064 { 00:26:41.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:41.064 "dma_device_type": 2 00:26:41.064 } 00:26:41.064 ], 00:26:41.064 "driver_specific": { 00:26:41.064 "raid": { 00:26:41.064 "uuid": "07dce619-4290-11ef-a0af-c98d8ee52a94", 00:26:41.064 "strip_size_kb": 0, 00:26:41.064 "state": "online", 00:26:41.064 "raid_level": "raid1", 00:26:41.064 "superblock": true, 00:26:41.064 "num_base_bdevs": 4, 00:26:41.064 "num_base_bdevs_discovered": 4, 00:26:41.064 "num_base_bdevs_operational": 4, 00:26:41.064 "base_bdevs_list": [ 00:26:41.064 { 00:26:41.064 "name": "BaseBdev1", 00:26:41.065 "uuid": "06f5909d-4290-11ef-a0af-c98d8ee52a94", 00:26:41.065 "is_configured": true, 00:26:41.065 "data_offset": 2048, 00:26:41.065 "data_size": 63488 00:26:41.065 }, 00:26:41.065 { 00:26:41.065 "name": "BaseBdev2", 00:26:41.065 "uuid": "085aa54d-4290-11ef-a0af-c98d8ee52a94", 00:26:41.065 "is_configured": true, 00:26:41.065 "data_offset": 2048, 00:26:41.065 "data_size": 63488 00:26:41.065 }, 00:26:41.065 { 00:26:41.065 "name": "BaseBdev3", 00:26:41.065 "uuid": "0917de08-4290-11ef-a0af-c98d8ee52a94", 00:26:41.065 "is_configured": true, 00:26:41.065 "data_offset": 2048, 00:26:41.065 "data_size": 63488 00:26:41.065 }, 00:26:41.065 { 00:26:41.065 "name": "BaseBdev4", 00:26:41.065 "uuid": "09cbed5d-4290-11ef-a0af-c98d8ee52a94", 00:26:41.065 "is_configured": true, 00:26:41.065 "data_offset": 2048, 00:26:41.065 "data_size": 63488 00:26:41.065 } 00:26:41.065 ] 00:26:41.065 } 00:26:41.065 } 00:26:41.065 }' 00:26:41.065 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:41.065 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:26:41.065 BaseBdev2 00:26:41.065 BaseBdev3 00:26:41.065 BaseBdev4' 00:26:41.065 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:41.065 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:26:41.065 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:41.323 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:41.323 "name": "BaseBdev1", 00:26:41.323 "aliases": [ 00:26:41.323 "06f5909d-4290-11ef-a0af-c98d8ee52a94" 00:26:41.323 ], 00:26:41.323 "product_name": "Malloc disk", 00:26:41.323 
"block_size": 512, 00:26:41.323 "num_blocks": 65536, 00:26:41.323 "uuid": "06f5909d-4290-11ef-a0af-c98d8ee52a94", 00:26:41.323 "assigned_rate_limits": { 00:26:41.323 "rw_ios_per_sec": 0, 00:26:41.323 "rw_mbytes_per_sec": 0, 00:26:41.323 "r_mbytes_per_sec": 0, 00:26:41.323 "w_mbytes_per_sec": 0 00:26:41.323 }, 00:26:41.323 "claimed": true, 00:26:41.323 "claim_type": "exclusive_write", 00:26:41.323 "zoned": false, 00:26:41.323 "supported_io_types": { 00:26:41.323 "read": true, 00:26:41.323 "write": true, 00:26:41.323 "unmap": true, 00:26:41.323 "flush": true, 00:26:41.323 "reset": true, 00:26:41.323 "nvme_admin": false, 00:26:41.323 "nvme_io": false, 00:26:41.323 "nvme_io_md": false, 00:26:41.323 "write_zeroes": true, 00:26:41.323 "zcopy": true, 00:26:41.323 "get_zone_info": false, 00:26:41.323 "zone_management": false, 00:26:41.323 "zone_append": false, 00:26:41.323 "compare": false, 00:26:41.323 "compare_and_write": false, 00:26:41.323 "abort": true, 00:26:41.323 "seek_hole": false, 00:26:41.323 "seek_data": false, 00:26:41.323 "copy": true, 00:26:41.323 "nvme_iov_md": false 00:26:41.323 }, 00:26:41.323 "memory_domains": [ 00:26:41.323 { 00:26:41.323 "dma_device_id": "system", 00:26:41.323 "dma_device_type": 1 00:26:41.323 }, 00:26:41.323 { 00:26:41.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:41.323 "dma_device_type": 2 00:26:41.323 } 00:26:41.323 ], 00:26:41.323 "driver_specific": {} 00:26:41.323 }' 00:26:41.323 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:41.323 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:41.323 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:41.323 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:41.323 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:41.323 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:41.323 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:41.323 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:41.323 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:41.323 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:41.323 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:41.323 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:41.323 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:41.323 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:41.323 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:41.581 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:41.581 "name": "BaseBdev2", 00:26:41.581 "aliases": [ 00:26:41.581 "085aa54d-4290-11ef-a0af-c98d8ee52a94" 00:26:41.581 ], 00:26:41.581 "product_name": "Malloc disk", 00:26:41.581 "block_size": 512, 00:26:41.581 "num_blocks": 65536, 00:26:41.582 "uuid": "085aa54d-4290-11ef-a0af-c98d8ee52a94", 00:26:41.582 "assigned_rate_limits": { 
00:26:41.582 "rw_ios_per_sec": 0, 00:26:41.582 "rw_mbytes_per_sec": 0, 00:26:41.582 "r_mbytes_per_sec": 0, 00:26:41.582 "w_mbytes_per_sec": 0 00:26:41.582 }, 00:26:41.582 "claimed": true, 00:26:41.582 "claim_type": "exclusive_write", 00:26:41.582 "zoned": false, 00:26:41.582 "supported_io_types": { 00:26:41.582 "read": true, 00:26:41.582 "write": true, 00:26:41.582 "unmap": true, 00:26:41.582 "flush": true, 00:26:41.582 "reset": true, 00:26:41.582 "nvme_admin": false, 00:26:41.582 "nvme_io": false, 00:26:41.582 "nvme_io_md": false, 00:26:41.582 "write_zeroes": true, 00:26:41.582 "zcopy": true, 00:26:41.582 "get_zone_info": false, 00:26:41.582 "zone_management": false, 00:26:41.582 "zone_append": false, 00:26:41.582 "compare": false, 00:26:41.582 "compare_and_write": false, 00:26:41.582 "abort": true, 00:26:41.582 "seek_hole": false, 00:26:41.582 "seek_data": false, 00:26:41.582 "copy": true, 00:26:41.582 "nvme_iov_md": false 00:26:41.582 }, 00:26:41.582 "memory_domains": [ 00:26:41.582 { 00:26:41.582 "dma_device_id": "system", 00:26:41.582 "dma_device_type": 1 00:26:41.582 }, 00:26:41.582 { 00:26:41.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:41.582 "dma_device_type": 2 00:26:41.582 } 00:26:41.582 ], 00:26:41.582 "driver_specific": {} 00:26:41.582 }' 00:26:41.582 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:41.582 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:41.582 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:41.582 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:41.582 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:41.582 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:41.582 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:41.582 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:41.582 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:41.582 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:41.582 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:41.582 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:41.582 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:41.582 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:26:41.582 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:41.841 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:41.841 "name": "BaseBdev3", 00:26:41.841 "aliases": [ 00:26:41.841 "0917de08-4290-11ef-a0af-c98d8ee52a94" 00:26:41.841 ], 00:26:41.841 "product_name": "Malloc disk", 00:26:41.841 "block_size": 512, 00:26:41.841 "num_blocks": 65536, 00:26:41.841 "uuid": "0917de08-4290-11ef-a0af-c98d8ee52a94", 00:26:41.841 "assigned_rate_limits": { 00:26:41.841 "rw_ios_per_sec": 0, 00:26:41.841 "rw_mbytes_per_sec": 0, 00:26:41.841 "r_mbytes_per_sec": 0, 00:26:41.841 "w_mbytes_per_sec": 0 
00:26:41.841 }, 00:26:41.841 "claimed": true, 00:26:41.841 "claim_type": "exclusive_write", 00:26:41.841 "zoned": false, 00:26:41.841 "supported_io_types": { 00:26:41.841 "read": true, 00:26:41.841 "write": true, 00:26:41.841 "unmap": true, 00:26:41.841 "flush": true, 00:26:41.841 "reset": true, 00:26:41.841 "nvme_admin": false, 00:26:41.841 "nvme_io": false, 00:26:41.841 "nvme_io_md": false, 00:26:41.841 "write_zeroes": true, 00:26:41.841 "zcopy": true, 00:26:41.841 "get_zone_info": false, 00:26:41.841 "zone_management": false, 00:26:41.841 "zone_append": false, 00:26:41.841 "compare": false, 00:26:41.841 "compare_and_write": false, 00:26:41.841 "abort": true, 00:26:41.841 "seek_hole": false, 00:26:41.841 "seek_data": false, 00:26:41.841 "copy": true, 00:26:41.841 "nvme_iov_md": false 00:26:41.841 }, 00:26:41.841 "memory_domains": [ 00:26:41.841 { 00:26:41.841 "dma_device_id": "system", 00:26:41.841 "dma_device_type": 1 00:26:41.841 }, 00:26:41.841 { 00:26:41.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:41.841 "dma_device_type": 2 00:26:41.841 } 00:26:41.841 ], 00:26:41.841 "driver_specific": {} 00:26:41.841 }' 00:26:41.841 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:41.841 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:41.841 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:41.841 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:41.841 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:41.841 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:41.841 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:42.100 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:42.100 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:42.100 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:42.100 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:42.100 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:42.100 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:42.100 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:26:42.100 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:42.358 09:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:42.358 "name": "BaseBdev4", 00:26:42.358 "aliases": [ 00:26:42.358 "09cbed5d-4290-11ef-a0af-c98d8ee52a94" 00:26:42.358 ], 00:26:42.358 "product_name": "Malloc disk", 00:26:42.358 "block_size": 512, 00:26:42.358 "num_blocks": 65536, 00:26:42.358 "uuid": "09cbed5d-4290-11ef-a0af-c98d8ee52a94", 00:26:42.358 "assigned_rate_limits": { 00:26:42.358 "rw_ios_per_sec": 0, 00:26:42.358 "rw_mbytes_per_sec": 0, 00:26:42.358 "r_mbytes_per_sec": 0, 00:26:42.358 "w_mbytes_per_sec": 0 00:26:42.359 }, 00:26:42.359 "claimed": true, 00:26:42.359 "claim_type": "exclusive_write", 00:26:42.359 "zoned": false, 00:26:42.359 
"supported_io_types": { 00:26:42.359 "read": true, 00:26:42.359 "write": true, 00:26:42.359 "unmap": true, 00:26:42.359 "flush": true, 00:26:42.359 "reset": true, 00:26:42.359 "nvme_admin": false, 00:26:42.359 "nvme_io": false, 00:26:42.359 "nvme_io_md": false, 00:26:42.359 "write_zeroes": true, 00:26:42.359 "zcopy": true, 00:26:42.359 "get_zone_info": false, 00:26:42.359 "zone_management": false, 00:26:42.359 "zone_append": false, 00:26:42.359 "compare": false, 00:26:42.359 "compare_and_write": false, 00:26:42.359 "abort": true, 00:26:42.359 "seek_hole": false, 00:26:42.359 "seek_data": false, 00:26:42.359 "copy": true, 00:26:42.359 "nvme_iov_md": false 00:26:42.359 }, 00:26:42.359 "memory_domains": [ 00:26:42.359 { 00:26:42.359 "dma_device_id": "system", 00:26:42.359 "dma_device_type": 1 00:26:42.359 }, 00:26:42.359 { 00:26:42.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:42.359 "dma_device_type": 2 00:26:42.359 } 00:26:42.359 ], 00:26:42.359 "driver_specific": {} 00:26:42.359 }' 00:26:42.359 09:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:42.359 09:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:42.359 09:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:42.359 09:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:42.359 09:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:42.359 09:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:42.359 09:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:42.359 09:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:42.359 09:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:42.359 09:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:42.359 09:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:42.359 09:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:42.359 09:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:42.617 [2024-07-15 09:53:10.513932] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:42.617 09:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:26:42.617 09:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:26:42.617 09:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:42.617 09:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:26:42.617 09:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:26:42.617 09:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:26:42.617 09:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:42.617 09:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:42.617 09:53:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:42.617 09:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:42.617 09:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:42.617 09:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:42.617 09:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:42.617 09:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:42.617 09:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:42.617 09:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:42.617 09:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:42.877 09:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:42.877 "name": "Existed_Raid", 00:26:42.877 "uuid": "07dce619-4290-11ef-a0af-c98d8ee52a94", 00:26:42.877 "strip_size_kb": 0, 00:26:42.877 "state": "online", 00:26:42.877 "raid_level": "raid1", 00:26:42.877 "superblock": true, 00:26:42.877 "num_base_bdevs": 4, 00:26:42.877 "num_base_bdevs_discovered": 3, 00:26:42.877 "num_base_bdevs_operational": 3, 00:26:42.877 "base_bdevs_list": [ 00:26:42.877 { 00:26:42.877 "name": null, 00:26:42.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:42.877 "is_configured": false, 00:26:42.877 "data_offset": 2048, 00:26:42.877 "data_size": 63488 00:26:42.877 }, 00:26:42.877 { 00:26:42.877 "name": "BaseBdev2", 00:26:42.877 "uuid": "085aa54d-4290-11ef-a0af-c98d8ee52a94", 00:26:42.877 "is_configured": true, 00:26:42.877 "data_offset": 2048, 00:26:42.877 "data_size": 63488 00:26:42.877 }, 00:26:42.877 { 00:26:42.877 "name": "BaseBdev3", 00:26:42.877 "uuid": "0917de08-4290-11ef-a0af-c98d8ee52a94", 00:26:42.877 "is_configured": true, 00:26:42.877 "data_offset": 2048, 00:26:42.877 "data_size": 63488 00:26:42.877 }, 00:26:42.877 { 00:26:42.877 "name": "BaseBdev4", 00:26:42.877 "uuid": "09cbed5d-4290-11ef-a0af-c98d8ee52a94", 00:26:42.877 "is_configured": true, 00:26:42.877 "data_offset": 2048, 00:26:42.877 "data_size": 63488 00:26:42.877 } 00:26:42.877 ] 00:26:42.877 }' 00:26:42.877 09:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:42.877 09:53:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:43.136 09:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:26:43.136 09:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:43.136 09:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:43.136 09:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:43.394 09:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:43.394 09:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:43.394 09:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:26:43.394 [2024-07-15 09:53:11.482882] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:43.652 09:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:43.652 09:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:43.652 09:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:43.652 09:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:43.652 09:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:43.652 09:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:43.652 09:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:26:43.911 [2024-07-15 09:53:11.923650] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:43.911 09:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:43.911 09:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:43.911 09:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:43.911 09:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:44.170 09:53:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:44.170 09:53:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:44.170 09:53:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:26:44.429 [2024-07-15 09:53:12.372494] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:26:44.429 [2024-07-15 09:53:12.372527] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:44.429 [2024-07-15 09:53:12.381454] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:44.429 [2024-07-15 09:53:12.381474] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:44.429 [2024-07-15 09:53:12.381478] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x133049e34a00 name Existed_Raid, state offline 00:26:44.429 09:53:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:44.429 09:53:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:44.429 09:53:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:26:44.429 09:53:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:44.689 09:53:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:26:44.689 09:53:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:26:44.689 09:53:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:26:44.689 09:53:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:26:44.689 09:53:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:44.689 09:53:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:44.948 BaseBdev2 00:26:44.948 09:53:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:26:44.948 09:53:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:26:44.948 09:53:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:44.948 09:53:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:26:44.948 09:53:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:44.948 09:53:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:44.948 09:53:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:45.208 09:53:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:45.208 [ 00:26:45.208 { 00:26:45.208 "name": "BaseBdev2", 00:26:45.208 "aliases": [ 00:26:45.208 "0cb9fe73-4290-11ef-a0af-c98d8ee52a94" 00:26:45.208 ], 00:26:45.208 "product_name": "Malloc disk", 00:26:45.208 "block_size": 512, 00:26:45.208 "num_blocks": 65536, 00:26:45.208 "uuid": "0cb9fe73-4290-11ef-a0af-c98d8ee52a94", 00:26:45.208 "assigned_rate_limits": { 00:26:45.208 "rw_ios_per_sec": 0, 00:26:45.208 "rw_mbytes_per_sec": 0, 00:26:45.208 "r_mbytes_per_sec": 0, 00:26:45.208 "w_mbytes_per_sec": 0 00:26:45.208 }, 00:26:45.208 "claimed": false, 00:26:45.208 "zoned": false, 00:26:45.208 "supported_io_types": { 00:26:45.208 "read": true, 00:26:45.208 "write": true, 00:26:45.208 "unmap": true, 00:26:45.208 "flush": true, 00:26:45.208 "reset": true, 00:26:45.208 "nvme_admin": false, 00:26:45.208 "nvme_io": false, 00:26:45.208 "nvme_io_md": false, 00:26:45.208 "write_zeroes": true, 00:26:45.208 "zcopy": true, 00:26:45.208 "get_zone_info": false, 00:26:45.208 "zone_management": false, 00:26:45.208 "zone_append": false, 00:26:45.208 "compare": false, 00:26:45.208 "compare_and_write": false, 00:26:45.208 "abort": true, 00:26:45.208 "seek_hole": false, 00:26:45.208 "seek_data": false, 00:26:45.208 "copy": true, 00:26:45.208 "nvme_iov_md": false 00:26:45.208 }, 00:26:45.208 "memory_domains": [ 00:26:45.208 { 00:26:45.208 "dma_device_id": "system", 00:26:45.208 "dma_device_type": 1 00:26:45.208 }, 00:26:45.208 { 00:26:45.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:45.208 "dma_device_type": 2 00:26:45.208 } 00:26:45.208 ], 00:26:45.208 "driver_specific": {} 00:26:45.208 } 00:26:45.208 ] 00:26:45.208 09:53:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:45.208 09:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:26:45.208 09:53:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:45.208 09:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:45.467 BaseBdev3 00:26:45.467 09:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:26:45.467 09:53:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:26:45.467 09:53:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:45.467 09:53:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:26:45.467 09:53:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:45.467 09:53:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:45.467 09:53:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:45.726 09:53:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:45.985 [ 00:26:45.985 { 00:26:45.985 "name": "BaseBdev3", 00:26:45.985 "aliases": [ 00:26:45.985 "0d1f50b3-4290-11ef-a0af-c98d8ee52a94" 00:26:45.985 ], 00:26:45.985 "product_name": "Malloc disk", 00:26:45.985 "block_size": 512, 00:26:45.985 "num_blocks": 65536, 00:26:45.985 "uuid": "0d1f50b3-4290-11ef-a0af-c98d8ee52a94", 00:26:45.985 "assigned_rate_limits": { 00:26:45.985 "rw_ios_per_sec": 0, 00:26:45.985 "rw_mbytes_per_sec": 0, 00:26:45.985 "r_mbytes_per_sec": 0, 00:26:45.985 "w_mbytes_per_sec": 0 00:26:45.985 }, 00:26:45.985 "claimed": false, 00:26:45.985 "zoned": false, 00:26:45.985 "supported_io_types": { 00:26:45.985 "read": true, 00:26:45.985 "write": true, 00:26:45.985 "unmap": true, 00:26:45.985 "flush": true, 00:26:45.985 "reset": true, 00:26:45.985 "nvme_admin": false, 00:26:45.985 "nvme_io": false, 00:26:45.985 "nvme_io_md": false, 00:26:45.985 "write_zeroes": true, 00:26:45.985 "zcopy": true, 00:26:45.985 "get_zone_info": false, 00:26:45.985 "zone_management": false, 00:26:45.985 "zone_append": false, 00:26:45.985 "compare": false, 00:26:45.985 "compare_and_write": false, 00:26:45.985 "abort": true, 00:26:45.985 "seek_hole": false, 00:26:45.985 "seek_data": false, 00:26:45.985 "copy": true, 00:26:45.985 "nvme_iov_md": false 00:26:45.985 }, 00:26:45.985 "memory_domains": [ 00:26:45.985 { 00:26:45.985 "dma_device_id": "system", 00:26:45.985 "dma_device_type": 1 00:26:45.985 }, 00:26:45.985 { 00:26:45.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:45.985 "dma_device_type": 2 00:26:45.985 } 00:26:45.985 ], 00:26:45.985 "driver_specific": {} 00:26:45.985 } 00:26:45.985 ] 00:26:45.985 09:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:45.985 09:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:26:45.985 09:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:45.985 09:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:26:46.244 BaseBdev4 00:26:46.244 09:53:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:26:46.244 09:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:26:46.244 09:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:46.244 09:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:26:46.244 09:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:46.244 09:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:46.244 09:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:46.504 09:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:46.762 [ 00:26:46.762 { 00:26:46.762 "name": "BaseBdev4", 00:26:46.762 "aliases": [ 00:26:46.762 "0d8fa09b-4290-11ef-a0af-c98d8ee52a94" 00:26:46.762 ], 00:26:46.762 "product_name": "Malloc disk", 00:26:46.762 "block_size": 512, 00:26:46.762 "num_blocks": 65536, 00:26:46.762 "uuid": "0d8fa09b-4290-11ef-a0af-c98d8ee52a94", 00:26:46.762 "assigned_rate_limits": { 00:26:46.762 "rw_ios_per_sec": 0, 00:26:46.762 "rw_mbytes_per_sec": 0, 00:26:46.762 "r_mbytes_per_sec": 0, 00:26:46.762 "w_mbytes_per_sec": 0 00:26:46.762 }, 00:26:46.762 "claimed": false, 00:26:46.762 "zoned": false, 00:26:46.762 "supported_io_types": { 00:26:46.762 "read": true, 00:26:46.762 "write": true, 00:26:46.762 "unmap": true, 00:26:46.762 "flush": true, 00:26:46.762 "reset": true, 00:26:46.762 "nvme_admin": false, 00:26:46.762 "nvme_io": false, 00:26:46.762 "nvme_io_md": false, 00:26:46.762 "write_zeroes": true, 00:26:46.762 "zcopy": true, 00:26:46.762 "get_zone_info": false, 00:26:46.762 "zone_management": false, 00:26:46.762 "zone_append": false, 00:26:46.762 "compare": false, 00:26:46.762 "compare_and_write": false, 00:26:46.762 "abort": true, 00:26:46.762 "seek_hole": false, 00:26:46.762 "seek_data": false, 00:26:46.762 "copy": true, 00:26:46.762 "nvme_iov_md": false 00:26:46.762 }, 00:26:46.762 "memory_domains": [ 00:26:46.762 { 00:26:46.762 "dma_device_id": "system", 00:26:46.762 "dma_device_type": 1 00:26:46.762 }, 00:26:46.762 { 00:26:46.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:46.762 "dma_device_type": 2 00:26:46.762 } 00:26:46.762 ], 00:26:46.762 "driver_specific": {} 00:26:46.762 } 00:26:46.762 ] 00:26:46.762 09:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:46.762 09:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:26:46.762 09:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:46.762 09:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:47.019 [2024-07-15 09:53:14.865567] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:47.019 [2024-07-15 09:53:14.865633] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:47.019 [2024-07-15 09:53:14.865641] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:47.019 [2024-07-15 09:53:14.866314] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:47.019 [2024-07-15 09:53:14.866334] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:47.019 09:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:47.019 09:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:47.019 09:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:47.019 09:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:47.019 09:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:47.019 09:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:47.019 09:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:47.019 09:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:47.019 09:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:47.019 09:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:47.019 09:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:47.019 09:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:47.019 09:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:47.019 "name": "Existed_Raid", 00:26:47.019 "uuid": "0df4f3f6-4290-11ef-a0af-c98d8ee52a94", 00:26:47.019 "strip_size_kb": 0, 00:26:47.019 "state": "configuring", 00:26:47.019 "raid_level": "raid1", 00:26:47.019 "superblock": true, 00:26:47.019 "num_base_bdevs": 4, 00:26:47.019 "num_base_bdevs_discovered": 3, 00:26:47.019 "num_base_bdevs_operational": 4, 00:26:47.019 "base_bdevs_list": [ 00:26:47.019 { 00:26:47.019 "name": "BaseBdev1", 00:26:47.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:47.019 "is_configured": false, 00:26:47.019 "data_offset": 0, 00:26:47.019 "data_size": 0 00:26:47.019 }, 00:26:47.019 { 00:26:47.019 "name": "BaseBdev2", 00:26:47.019 "uuid": "0cb9fe73-4290-11ef-a0af-c98d8ee52a94", 00:26:47.019 "is_configured": true, 00:26:47.019 "data_offset": 2048, 00:26:47.019 "data_size": 63488 00:26:47.019 }, 00:26:47.019 { 00:26:47.019 "name": "BaseBdev3", 00:26:47.019 "uuid": "0d1f50b3-4290-11ef-a0af-c98d8ee52a94", 00:26:47.019 "is_configured": true, 00:26:47.019 "data_offset": 2048, 00:26:47.019 "data_size": 63488 00:26:47.019 }, 00:26:47.019 { 00:26:47.019 "name": "BaseBdev4", 00:26:47.019 "uuid": "0d8fa09b-4290-11ef-a0af-c98d8ee52a94", 00:26:47.019 "is_configured": true, 00:26:47.019 "data_offset": 2048, 00:26:47.019 "data_size": 63488 00:26:47.019 } 00:26:47.019 ] 00:26:47.019 }' 00:26:47.019 09:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:47.019 09:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:47.585 09:53:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:26:47.585 [2024-07-15 09:53:15.629599] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:47.585 09:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:47.585 09:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:47.585 09:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:47.585 09:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:47.585 09:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:47.585 09:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:47.585 09:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:47.585 09:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:47.585 09:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:47.585 09:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:47.585 09:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:47.585 09:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:47.844 09:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:47.844 "name": "Existed_Raid", 00:26:47.844 "uuid": "0df4f3f6-4290-11ef-a0af-c98d8ee52a94", 00:26:47.844 "strip_size_kb": 0, 00:26:47.844 "state": "configuring", 00:26:47.844 "raid_level": "raid1", 00:26:47.844 "superblock": true, 00:26:47.844 "num_base_bdevs": 4, 00:26:47.844 "num_base_bdevs_discovered": 2, 00:26:47.844 "num_base_bdevs_operational": 4, 00:26:47.844 "base_bdevs_list": [ 00:26:47.844 { 00:26:47.844 "name": "BaseBdev1", 00:26:47.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:47.844 "is_configured": false, 00:26:47.844 "data_offset": 0, 00:26:47.844 "data_size": 0 00:26:47.844 }, 00:26:47.844 { 00:26:47.844 "name": null, 00:26:47.844 "uuid": "0cb9fe73-4290-11ef-a0af-c98d8ee52a94", 00:26:47.844 "is_configured": false, 00:26:47.844 "data_offset": 2048, 00:26:47.844 "data_size": 63488 00:26:47.844 }, 00:26:47.844 { 00:26:47.844 "name": "BaseBdev3", 00:26:47.844 "uuid": "0d1f50b3-4290-11ef-a0af-c98d8ee52a94", 00:26:47.844 "is_configured": true, 00:26:47.844 "data_offset": 2048, 00:26:47.844 "data_size": 63488 00:26:47.844 }, 00:26:47.844 { 00:26:47.844 "name": "BaseBdev4", 00:26:47.844 "uuid": "0d8fa09b-4290-11ef-a0af-c98d8ee52a94", 00:26:47.844 "is_configured": true, 00:26:47.844 "data_offset": 2048, 00:26:47.844 "data_size": 63488 00:26:47.844 } 00:26:47.844 ] 00:26:47.844 }' 00:26:47.844 09:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:47.844 09:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:48.102 09:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:26:48.361 09:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:48.361 09:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:26:48.361 09:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:48.620 [2024-07-15 09:53:16.633792] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:48.620 BaseBdev1 00:26:48.620 09:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:26:48.620 09:53:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:26:48.620 09:53:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:48.620 09:53:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:26:48.620 09:53:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:48.620 09:53:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:48.620 09:53:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:48.879 09:53:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:49.137 [ 00:26:49.137 { 00:26:49.137 "name": "BaseBdev1", 00:26:49.137 "aliases": [ 00:26:49.137 "0f02bf12-4290-11ef-a0af-c98d8ee52a94" 00:26:49.137 ], 00:26:49.137 "product_name": "Malloc disk", 00:26:49.137 "block_size": 512, 00:26:49.137 "num_blocks": 65536, 00:26:49.137 "uuid": "0f02bf12-4290-11ef-a0af-c98d8ee52a94", 00:26:49.137 "assigned_rate_limits": { 00:26:49.137 "rw_ios_per_sec": 0, 00:26:49.137 "rw_mbytes_per_sec": 0, 00:26:49.137 "r_mbytes_per_sec": 0, 00:26:49.137 "w_mbytes_per_sec": 0 00:26:49.137 }, 00:26:49.137 "claimed": true, 00:26:49.137 "claim_type": "exclusive_write", 00:26:49.137 "zoned": false, 00:26:49.137 "supported_io_types": { 00:26:49.137 "read": true, 00:26:49.137 "write": true, 00:26:49.137 "unmap": true, 00:26:49.137 "flush": true, 00:26:49.137 "reset": true, 00:26:49.137 "nvme_admin": false, 00:26:49.137 "nvme_io": false, 00:26:49.137 "nvme_io_md": false, 00:26:49.137 "write_zeroes": true, 00:26:49.137 "zcopy": true, 00:26:49.137 "get_zone_info": false, 00:26:49.137 "zone_management": false, 00:26:49.137 "zone_append": false, 00:26:49.137 "compare": false, 00:26:49.137 "compare_and_write": false, 00:26:49.137 "abort": true, 00:26:49.137 "seek_hole": false, 00:26:49.137 "seek_data": false, 00:26:49.137 "copy": true, 00:26:49.137 "nvme_iov_md": false 00:26:49.137 }, 00:26:49.137 "memory_domains": [ 00:26:49.137 { 00:26:49.137 "dma_device_id": "system", 00:26:49.137 "dma_device_type": 1 00:26:49.138 }, 00:26:49.138 { 00:26:49.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:49.138 "dma_device_type": 2 00:26:49.138 } 00:26:49.138 ], 00:26:49.138 "driver_specific": {} 00:26:49.138 } 00:26:49.138 ] 00:26:49.138 09:53:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:49.138 09:53:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:49.138 09:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:49.138 09:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:49.138 09:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:49.138 09:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:49.138 09:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:49.138 09:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:49.138 09:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:49.138 09:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:49.138 09:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:49.138 09:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:49.138 09:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:49.396 09:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:49.396 "name": "Existed_Raid", 00:26:49.396 "uuid": "0df4f3f6-4290-11ef-a0af-c98d8ee52a94", 00:26:49.396 "strip_size_kb": 0, 00:26:49.396 "state": "configuring", 00:26:49.396 "raid_level": "raid1", 00:26:49.396 "superblock": true, 00:26:49.396 "num_base_bdevs": 4, 00:26:49.396 "num_base_bdevs_discovered": 3, 00:26:49.396 "num_base_bdevs_operational": 4, 00:26:49.396 "base_bdevs_list": [ 00:26:49.396 { 00:26:49.396 "name": "BaseBdev1", 00:26:49.396 "uuid": "0f02bf12-4290-11ef-a0af-c98d8ee52a94", 00:26:49.396 "is_configured": true, 00:26:49.396 "data_offset": 2048, 00:26:49.396 "data_size": 63488 00:26:49.396 }, 00:26:49.396 { 00:26:49.396 "name": null, 00:26:49.396 "uuid": "0cb9fe73-4290-11ef-a0af-c98d8ee52a94", 00:26:49.396 "is_configured": false, 00:26:49.396 "data_offset": 2048, 00:26:49.396 "data_size": 63488 00:26:49.396 }, 00:26:49.396 { 00:26:49.396 "name": "BaseBdev3", 00:26:49.396 "uuid": "0d1f50b3-4290-11ef-a0af-c98d8ee52a94", 00:26:49.396 "is_configured": true, 00:26:49.396 "data_offset": 2048, 00:26:49.396 "data_size": 63488 00:26:49.396 }, 00:26:49.396 { 00:26:49.396 "name": "BaseBdev4", 00:26:49.396 "uuid": "0d8fa09b-4290-11ef-a0af-c98d8ee52a94", 00:26:49.396 "is_configured": true, 00:26:49.396 "data_offset": 2048, 00:26:49.396 "data_size": 63488 00:26:49.396 } 00:26:49.396 ] 00:26:49.396 }' 00:26:49.396 09:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:49.396 09:53:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.655 09:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:49.655 09:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:49.914 09:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:26:49.914 09:53:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:26:50.172 [2024-07-15 09:53:18.057763] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:50.172 09:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:50.172 09:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:50.172 09:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:50.172 09:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:50.172 09:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:50.172 09:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:50.172 09:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:50.172 09:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:50.172 09:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:50.172 09:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:50.172 09:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:50.172 09:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:50.172 09:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:50.172 "name": "Existed_Raid", 00:26:50.172 "uuid": "0df4f3f6-4290-11ef-a0af-c98d8ee52a94", 00:26:50.172 "strip_size_kb": 0, 00:26:50.172 "state": "configuring", 00:26:50.172 "raid_level": "raid1", 00:26:50.172 "superblock": true, 00:26:50.172 "num_base_bdevs": 4, 00:26:50.172 "num_base_bdevs_discovered": 2, 00:26:50.172 "num_base_bdevs_operational": 4, 00:26:50.172 "base_bdevs_list": [ 00:26:50.172 { 00:26:50.172 "name": "BaseBdev1", 00:26:50.172 "uuid": "0f02bf12-4290-11ef-a0af-c98d8ee52a94", 00:26:50.172 "is_configured": true, 00:26:50.172 "data_offset": 2048, 00:26:50.172 "data_size": 63488 00:26:50.172 }, 00:26:50.172 { 00:26:50.172 "name": null, 00:26:50.172 "uuid": "0cb9fe73-4290-11ef-a0af-c98d8ee52a94", 00:26:50.172 "is_configured": false, 00:26:50.172 "data_offset": 2048, 00:26:50.172 "data_size": 63488 00:26:50.172 }, 00:26:50.172 { 00:26:50.172 "name": null, 00:26:50.172 "uuid": "0d1f50b3-4290-11ef-a0af-c98d8ee52a94", 00:26:50.172 "is_configured": false, 00:26:50.172 "data_offset": 2048, 00:26:50.172 "data_size": 63488 00:26:50.172 }, 00:26:50.172 { 00:26:50.172 "name": "BaseBdev4", 00:26:50.172 "uuid": "0d8fa09b-4290-11ef-a0af-c98d8ee52a94", 00:26:50.172 "is_configured": true, 00:26:50.172 "data_offset": 2048, 00:26:50.172 "data_size": 63488 00:26:50.172 } 00:26:50.172 ] 00:26:50.172 }' 00:26:50.172 09:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:50.172 09:53:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:50.739 09:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:26:50.739 09:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:50.739 09:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:26:50.739 09:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:26:51.006 [2024-07-15 09:53:18.941814] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:51.006 09:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:51.006 09:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:51.006 09:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:51.006 09:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:51.006 09:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:51.006 09:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:51.006 09:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:51.006 09:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:51.006 09:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:51.006 09:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:51.006 09:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:51.006 09:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:51.312 09:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:51.312 "name": "Existed_Raid", 00:26:51.312 "uuid": "0df4f3f6-4290-11ef-a0af-c98d8ee52a94", 00:26:51.312 "strip_size_kb": 0, 00:26:51.312 "state": "configuring", 00:26:51.312 "raid_level": "raid1", 00:26:51.312 "superblock": true, 00:26:51.312 "num_base_bdevs": 4, 00:26:51.312 "num_base_bdevs_discovered": 3, 00:26:51.312 "num_base_bdevs_operational": 4, 00:26:51.312 "base_bdevs_list": [ 00:26:51.312 { 00:26:51.312 "name": "BaseBdev1", 00:26:51.312 "uuid": "0f02bf12-4290-11ef-a0af-c98d8ee52a94", 00:26:51.312 "is_configured": true, 00:26:51.312 "data_offset": 2048, 00:26:51.312 "data_size": 63488 00:26:51.312 }, 00:26:51.312 { 00:26:51.312 "name": null, 00:26:51.312 "uuid": "0cb9fe73-4290-11ef-a0af-c98d8ee52a94", 00:26:51.312 "is_configured": false, 00:26:51.312 "data_offset": 2048, 00:26:51.312 "data_size": 63488 00:26:51.312 }, 00:26:51.312 { 00:26:51.312 "name": "BaseBdev3", 00:26:51.312 "uuid": "0d1f50b3-4290-11ef-a0af-c98d8ee52a94", 00:26:51.312 "is_configured": true, 00:26:51.312 "data_offset": 2048, 00:26:51.312 "data_size": 63488 00:26:51.312 }, 00:26:51.312 { 00:26:51.312 "name": "BaseBdev4", 00:26:51.312 "uuid": "0d8fa09b-4290-11ef-a0af-c98d8ee52a94", 00:26:51.312 "is_configured": true, 00:26:51.312 "data_offset": 2048, 
00:26:51.312 "data_size": 63488 00:26:51.312 } 00:26:51.312 ] 00:26:51.312 }' 00:26:51.312 09:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:51.312 09:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:51.570 09:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:51.570 09:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:51.571 09:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:26:51.571 09:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:51.830 [2024-07-15 09:53:19.825865] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:51.830 09:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:51.830 09:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:51.830 09:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:51.830 09:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:51.830 09:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:51.830 09:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:51.830 09:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:51.830 09:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:51.830 09:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:51.830 09:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:51.830 09:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:51.830 09:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:52.088 09:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:52.088 "name": "Existed_Raid", 00:26:52.088 "uuid": "0df4f3f6-4290-11ef-a0af-c98d8ee52a94", 00:26:52.088 "strip_size_kb": 0, 00:26:52.088 "state": "configuring", 00:26:52.088 "raid_level": "raid1", 00:26:52.088 "superblock": true, 00:26:52.088 "num_base_bdevs": 4, 00:26:52.088 "num_base_bdevs_discovered": 2, 00:26:52.088 "num_base_bdevs_operational": 4, 00:26:52.088 "base_bdevs_list": [ 00:26:52.088 { 00:26:52.088 "name": null, 00:26:52.088 "uuid": "0f02bf12-4290-11ef-a0af-c98d8ee52a94", 00:26:52.088 "is_configured": false, 00:26:52.088 "data_offset": 2048, 00:26:52.088 "data_size": 63488 00:26:52.088 }, 00:26:52.088 { 00:26:52.088 "name": null, 00:26:52.088 "uuid": "0cb9fe73-4290-11ef-a0af-c98d8ee52a94", 00:26:52.088 "is_configured": false, 00:26:52.088 "data_offset": 2048, 00:26:52.088 "data_size": 63488 00:26:52.088 }, 00:26:52.088 { 00:26:52.088 "name": "BaseBdev3", 00:26:52.088 "uuid": 
"0d1f50b3-4290-11ef-a0af-c98d8ee52a94", 00:26:52.088 "is_configured": true, 00:26:52.088 "data_offset": 2048, 00:26:52.088 "data_size": 63488 00:26:52.088 }, 00:26:52.088 { 00:26:52.088 "name": "BaseBdev4", 00:26:52.088 "uuid": "0d8fa09b-4290-11ef-a0af-c98d8ee52a94", 00:26:52.088 "is_configured": true, 00:26:52.088 "data_offset": 2048, 00:26:52.088 "data_size": 63488 00:26:52.088 } 00:26:52.088 ] 00:26:52.088 }' 00:26:52.088 09:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:52.088 09:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:52.346 09:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:52.346 09:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:52.605 09:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:26:52.605 09:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:26:52.866 [2024-07-15 09:53:20.726235] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:52.866 09:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:52.866 09:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:52.866 09:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:52.866 09:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:52.866 09:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:52.866 09:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:52.866 09:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:52.866 09:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:52.866 09:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:52.866 09:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:52.866 09:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:52.866 09:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:52.866 09:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:52.866 "name": "Existed_Raid", 00:26:52.866 "uuid": "0df4f3f6-4290-11ef-a0af-c98d8ee52a94", 00:26:52.866 "strip_size_kb": 0, 00:26:52.866 "state": "configuring", 00:26:52.866 "raid_level": "raid1", 00:26:52.866 "superblock": true, 00:26:52.866 "num_base_bdevs": 4, 00:26:52.866 "num_base_bdevs_discovered": 3, 00:26:52.866 "num_base_bdevs_operational": 4, 00:26:52.866 "base_bdevs_list": [ 00:26:52.866 { 00:26:52.866 "name": null, 00:26:52.866 "uuid": "0f02bf12-4290-11ef-a0af-c98d8ee52a94", 00:26:52.866 "is_configured": false, 
00:26:52.866 "data_offset": 2048, 00:26:52.866 "data_size": 63488 00:26:52.866 }, 00:26:52.866 { 00:26:52.866 "name": "BaseBdev2", 00:26:52.866 "uuid": "0cb9fe73-4290-11ef-a0af-c98d8ee52a94", 00:26:52.866 "is_configured": true, 00:26:52.866 "data_offset": 2048, 00:26:52.866 "data_size": 63488 00:26:52.866 }, 00:26:52.866 { 00:26:52.866 "name": "BaseBdev3", 00:26:52.866 "uuid": "0d1f50b3-4290-11ef-a0af-c98d8ee52a94", 00:26:52.866 "is_configured": true, 00:26:52.866 "data_offset": 2048, 00:26:52.866 "data_size": 63488 00:26:52.866 }, 00:26:52.866 { 00:26:52.866 "name": "BaseBdev4", 00:26:52.866 "uuid": "0d8fa09b-4290-11ef-a0af-c98d8ee52a94", 00:26:52.866 "is_configured": true, 00:26:52.866 "data_offset": 2048, 00:26:52.866 "data_size": 63488 00:26:52.866 } 00:26:52.866 ] 00:26:52.866 }' 00:26:52.866 09:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:52.866 09:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:53.434 09:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:53.434 09:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:53.434 09:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:26:53.434 09:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:26:53.434 09:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:53.693 09:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 0f02bf12-4290-11ef-a0af-c98d8ee52a94 00:26:53.952 [2024-07-15 09:53:21.830389] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:26:53.952 [2024-07-15 09:53:21.830437] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x133049e34f00 00:26:53.952 [2024-07-15 09:53:21.830442] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:53.952 [2024-07-15 09:53:21.830458] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x133049e97e20 00:26:53.952 [2024-07-15 09:53:21.830495] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x133049e34f00 00:26:53.952 [2024-07-15 09:53:21.830498] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x133049e34f00 00:26:53.952 [2024-07-15 09:53:21.830515] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:53.952 NewBaseBdev 00:26:53.952 09:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:26:53.952 09:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:26:53.952 09:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:53.952 09:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:26:53.952 09:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:53.952 09:53:21 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:53.952 09:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:54.211 09:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:26:54.211 [ 00:26:54.211 { 00:26:54.211 "name": "NewBaseBdev", 00:26:54.211 "aliases": [ 00:26:54.211 "0f02bf12-4290-11ef-a0af-c98d8ee52a94" 00:26:54.211 ], 00:26:54.211 "product_name": "Malloc disk", 00:26:54.211 "block_size": 512, 00:26:54.211 "num_blocks": 65536, 00:26:54.211 "uuid": "0f02bf12-4290-11ef-a0af-c98d8ee52a94", 00:26:54.211 "assigned_rate_limits": { 00:26:54.211 "rw_ios_per_sec": 0, 00:26:54.211 "rw_mbytes_per_sec": 0, 00:26:54.211 "r_mbytes_per_sec": 0, 00:26:54.211 "w_mbytes_per_sec": 0 00:26:54.211 }, 00:26:54.211 "claimed": true, 00:26:54.211 "claim_type": "exclusive_write", 00:26:54.211 "zoned": false, 00:26:54.211 "supported_io_types": { 00:26:54.211 "read": true, 00:26:54.211 "write": true, 00:26:54.211 "unmap": true, 00:26:54.211 "flush": true, 00:26:54.211 "reset": true, 00:26:54.211 "nvme_admin": false, 00:26:54.211 "nvme_io": false, 00:26:54.211 "nvme_io_md": false, 00:26:54.211 "write_zeroes": true, 00:26:54.211 "zcopy": true, 00:26:54.211 "get_zone_info": false, 00:26:54.211 "zone_management": false, 00:26:54.211 "zone_append": false, 00:26:54.211 "compare": false, 00:26:54.211 "compare_and_write": false, 00:26:54.211 "abort": true, 00:26:54.211 "seek_hole": false, 00:26:54.211 "seek_data": false, 00:26:54.211 "copy": true, 00:26:54.211 "nvme_iov_md": false 00:26:54.211 }, 00:26:54.211 "memory_domains": [ 00:26:54.211 { 00:26:54.211 "dma_device_id": "system", 00:26:54.211 "dma_device_type": 1 00:26:54.211 }, 00:26:54.211 { 00:26:54.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:54.211 "dma_device_type": 2 00:26:54.211 } 00:26:54.211 ], 00:26:54.211 "driver_specific": {} 00:26:54.211 } 00:26:54.211 ] 00:26:54.469 09:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:54.469 09:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:26:54.469 09:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:54.469 09:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:54.469 09:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:54.469 09:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:54.469 09:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:54.469 09:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:54.469 09:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:54.469 09:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:54.469 09:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:54.469 09:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:54.469 09:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:54.469 09:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:54.469 "name": "Existed_Raid", 00:26:54.469 "uuid": "0df4f3f6-4290-11ef-a0af-c98d8ee52a94", 00:26:54.469 "strip_size_kb": 0, 00:26:54.469 "state": "online", 00:26:54.469 "raid_level": "raid1", 00:26:54.469 "superblock": true, 00:26:54.469 "num_base_bdevs": 4, 00:26:54.469 "num_base_bdevs_discovered": 4, 00:26:54.469 "num_base_bdevs_operational": 4, 00:26:54.469 "base_bdevs_list": [ 00:26:54.469 { 00:26:54.469 "name": "NewBaseBdev", 00:26:54.469 "uuid": "0f02bf12-4290-11ef-a0af-c98d8ee52a94", 00:26:54.469 "is_configured": true, 00:26:54.469 "data_offset": 2048, 00:26:54.469 "data_size": 63488 00:26:54.469 }, 00:26:54.469 { 00:26:54.469 "name": "BaseBdev2", 00:26:54.469 "uuid": "0cb9fe73-4290-11ef-a0af-c98d8ee52a94", 00:26:54.469 "is_configured": true, 00:26:54.469 "data_offset": 2048, 00:26:54.469 "data_size": 63488 00:26:54.469 }, 00:26:54.469 { 00:26:54.469 "name": "BaseBdev3", 00:26:54.469 "uuid": "0d1f50b3-4290-11ef-a0af-c98d8ee52a94", 00:26:54.469 "is_configured": true, 00:26:54.469 "data_offset": 2048, 00:26:54.469 "data_size": 63488 00:26:54.469 }, 00:26:54.469 { 00:26:54.469 "name": "BaseBdev4", 00:26:54.469 "uuid": "0d8fa09b-4290-11ef-a0af-c98d8ee52a94", 00:26:54.469 "is_configured": true, 00:26:54.469 "data_offset": 2048, 00:26:54.469 "data_size": 63488 00:26:54.469 } 00:26:54.469 ] 00:26:54.469 }' 00:26:54.469 09:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:54.469 09:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:54.729 09:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:26:54.729 09:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:26:54.729 09:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:54.729 09:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:54.729 09:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:54.729 09:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:26:54.729 09:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:54.729 09:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:55.013 [2024-07-15 09:53:23.014357] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:55.013 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:55.013 "name": "Existed_Raid", 00:26:55.013 "aliases": [ 00:26:55.013 "0df4f3f6-4290-11ef-a0af-c98d8ee52a94" 00:26:55.013 ], 00:26:55.013 "product_name": "Raid Volume", 00:26:55.013 "block_size": 512, 00:26:55.013 "num_blocks": 63488, 00:26:55.013 "uuid": "0df4f3f6-4290-11ef-a0af-c98d8ee52a94", 00:26:55.013 "assigned_rate_limits": { 00:26:55.013 "rw_ios_per_sec": 0, 00:26:55.013 "rw_mbytes_per_sec": 0, 00:26:55.013 "r_mbytes_per_sec": 0, 00:26:55.013 "w_mbytes_per_sec": 0 00:26:55.013 }, 
00:26:55.013 "claimed": false, 00:26:55.013 "zoned": false, 00:26:55.013 "supported_io_types": { 00:26:55.013 "read": true, 00:26:55.013 "write": true, 00:26:55.013 "unmap": false, 00:26:55.013 "flush": false, 00:26:55.013 "reset": true, 00:26:55.013 "nvme_admin": false, 00:26:55.013 "nvme_io": false, 00:26:55.013 "nvme_io_md": false, 00:26:55.013 "write_zeroes": true, 00:26:55.013 "zcopy": false, 00:26:55.013 "get_zone_info": false, 00:26:55.013 "zone_management": false, 00:26:55.013 "zone_append": false, 00:26:55.013 "compare": false, 00:26:55.013 "compare_and_write": false, 00:26:55.013 "abort": false, 00:26:55.013 "seek_hole": false, 00:26:55.013 "seek_data": false, 00:26:55.013 "copy": false, 00:26:55.013 "nvme_iov_md": false 00:26:55.013 }, 00:26:55.013 "memory_domains": [ 00:26:55.013 { 00:26:55.013 "dma_device_id": "system", 00:26:55.013 "dma_device_type": 1 00:26:55.013 }, 00:26:55.013 { 00:26:55.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:55.013 "dma_device_type": 2 00:26:55.013 }, 00:26:55.013 { 00:26:55.013 "dma_device_id": "system", 00:26:55.013 "dma_device_type": 1 00:26:55.013 }, 00:26:55.013 { 00:26:55.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:55.013 "dma_device_type": 2 00:26:55.013 }, 00:26:55.013 { 00:26:55.013 "dma_device_id": "system", 00:26:55.013 "dma_device_type": 1 00:26:55.013 }, 00:26:55.013 { 00:26:55.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:55.013 "dma_device_type": 2 00:26:55.013 }, 00:26:55.013 { 00:26:55.013 "dma_device_id": "system", 00:26:55.013 "dma_device_type": 1 00:26:55.013 }, 00:26:55.013 { 00:26:55.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:55.013 "dma_device_type": 2 00:26:55.013 } 00:26:55.013 ], 00:26:55.013 "driver_specific": { 00:26:55.013 "raid": { 00:26:55.013 "uuid": "0df4f3f6-4290-11ef-a0af-c98d8ee52a94", 00:26:55.013 "strip_size_kb": 0, 00:26:55.013 "state": "online", 00:26:55.013 "raid_level": "raid1", 00:26:55.013 "superblock": true, 00:26:55.013 "num_base_bdevs": 4, 00:26:55.013 "num_base_bdevs_discovered": 4, 00:26:55.013 "num_base_bdevs_operational": 4, 00:26:55.013 "base_bdevs_list": [ 00:26:55.013 { 00:26:55.013 "name": "NewBaseBdev", 00:26:55.013 "uuid": "0f02bf12-4290-11ef-a0af-c98d8ee52a94", 00:26:55.013 "is_configured": true, 00:26:55.013 "data_offset": 2048, 00:26:55.013 "data_size": 63488 00:26:55.013 }, 00:26:55.013 { 00:26:55.013 "name": "BaseBdev2", 00:26:55.013 "uuid": "0cb9fe73-4290-11ef-a0af-c98d8ee52a94", 00:26:55.013 "is_configured": true, 00:26:55.013 "data_offset": 2048, 00:26:55.013 "data_size": 63488 00:26:55.013 }, 00:26:55.013 { 00:26:55.013 "name": "BaseBdev3", 00:26:55.013 "uuid": "0d1f50b3-4290-11ef-a0af-c98d8ee52a94", 00:26:55.013 "is_configured": true, 00:26:55.013 "data_offset": 2048, 00:26:55.013 "data_size": 63488 00:26:55.013 }, 00:26:55.013 { 00:26:55.013 "name": "BaseBdev4", 00:26:55.013 "uuid": "0d8fa09b-4290-11ef-a0af-c98d8ee52a94", 00:26:55.013 "is_configured": true, 00:26:55.013 "data_offset": 2048, 00:26:55.013 "data_size": 63488 00:26:55.013 } 00:26:55.013 ] 00:26:55.013 } 00:26:55.013 } 00:26:55.013 }' 00:26:55.013 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:55.013 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:26:55.013 BaseBdev2 00:26:55.013 BaseBdev3 00:26:55.013 BaseBdev4' 00:26:55.013 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name 
in $base_bdev_names 00:26:55.013 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:26:55.013 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:55.290 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:55.290 "name": "NewBaseBdev", 00:26:55.290 "aliases": [ 00:26:55.290 "0f02bf12-4290-11ef-a0af-c98d8ee52a94" 00:26:55.290 ], 00:26:55.290 "product_name": "Malloc disk", 00:26:55.290 "block_size": 512, 00:26:55.290 "num_blocks": 65536, 00:26:55.290 "uuid": "0f02bf12-4290-11ef-a0af-c98d8ee52a94", 00:26:55.290 "assigned_rate_limits": { 00:26:55.290 "rw_ios_per_sec": 0, 00:26:55.290 "rw_mbytes_per_sec": 0, 00:26:55.290 "r_mbytes_per_sec": 0, 00:26:55.290 "w_mbytes_per_sec": 0 00:26:55.290 }, 00:26:55.290 "claimed": true, 00:26:55.290 "claim_type": "exclusive_write", 00:26:55.290 "zoned": false, 00:26:55.290 "supported_io_types": { 00:26:55.290 "read": true, 00:26:55.290 "write": true, 00:26:55.290 "unmap": true, 00:26:55.290 "flush": true, 00:26:55.290 "reset": true, 00:26:55.290 "nvme_admin": false, 00:26:55.290 "nvme_io": false, 00:26:55.290 "nvme_io_md": false, 00:26:55.290 "write_zeroes": true, 00:26:55.290 "zcopy": true, 00:26:55.290 "get_zone_info": false, 00:26:55.290 "zone_management": false, 00:26:55.290 "zone_append": false, 00:26:55.290 "compare": false, 00:26:55.290 "compare_and_write": false, 00:26:55.290 "abort": true, 00:26:55.290 "seek_hole": false, 00:26:55.290 "seek_data": false, 00:26:55.290 "copy": true, 00:26:55.290 "nvme_iov_md": false 00:26:55.290 }, 00:26:55.290 "memory_domains": [ 00:26:55.290 { 00:26:55.290 "dma_device_id": "system", 00:26:55.290 "dma_device_type": 1 00:26:55.290 }, 00:26:55.290 { 00:26:55.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:55.290 "dma_device_type": 2 00:26:55.290 } 00:26:55.290 ], 00:26:55.290 "driver_specific": {} 00:26:55.290 }' 00:26:55.290 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:55.290 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:55.290 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:55.290 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:55.290 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:55.290 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:55.290 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:55.290 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:55.290 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:55.290 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:55.290 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:55.290 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:55.290 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:55.291 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:55.291 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:55.563 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:55.563 "name": "BaseBdev2", 00:26:55.563 "aliases": [ 00:26:55.563 "0cb9fe73-4290-11ef-a0af-c98d8ee52a94" 00:26:55.563 ], 00:26:55.563 "product_name": "Malloc disk", 00:26:55.563 "block_size": 512, 00:26:55.563 "num_blocks": 65536, 00:26:55.563 "uuid": "0cb9fe73-4290-11ef-a0af-c98d8ee52a94", 00:26:55.563 "assigned_rate_limits": { 00:26:55.563 "rw_ios_per_sec": 0, 00:26:55.563 "rw_mbytes_per_sec": 0, 00:26:55.563 "r_mbytes_per_sec": 0, 00:26:55.563 "w_mbytes_per_sec": 0 00:26:55.563 }, 00:26:55.563 "claimed": true, 00:26:55.563 "claim_type": "exclusive_write", 00:26:55.563 "zoned": false, 00:26:55.563 "supported_io_types": { 00:26:55.563 "read": true, 00:26:55.563 "write": true, 00:26:55.563 "unmap": true, 00:26:55.563 "flush": true, 00:26:55.563 "reset": true, 00:26:55.563 "nvme_admin": false, 00:26:55.563 "nvme_io": false, 00:26:55.563 "nvme_io_md": false, 00:26:55.563 "write_zeroes": true, 00:26:55.563 "zcopy": true, 00:26:55.563 "get_zone_info": false, 00:26:55.563 "zone_management": false, 00:26:55.563 "zone_append": false, 00:26:55.563 "compare": false, 00:26:55.563 "compare_and_write": false, 00:26:55.563 "abort": true, 00:26:55.563 "seek_hole": false, 00:26:55.563 "seek_data": false, 00:26:55.563 "copy": true, 00:26:55.563 "nvme_iov_md": false 00:26:55.563 }, 00:26:55.563 "memory_domains": [ 00:26:55.563 { 00:26:55.563 "dma_device_id": "system", 00:26:55.563 "dma_device_type": 1 00:26:55.563 }, 00:26:55.563 { 00:26:55.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:55.563 "dma_device_type": 2 00:26:55.563 } 00:26:55.563 ], 00:26:55.563 "driver_specific": {} 00:26:55.563 }' 00:26:55.563 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:55.563 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:55.563 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:55.563 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:55.563 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:55.563 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:55.563 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:55.563 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:55.563 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:55.563 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:55.563 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:55.563 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:55.563 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:55.563 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:26:55.563 09:53:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:55.822 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:55.822 "name": "BaseBdev3", 00:26:55.822 "aliases": [ 00:26:55.822 "0d1f50b3-4290-11ef-a0af-c98d8ee52a94" 00:26:55.822 ], 00:26:55.822 "product_name": "Malloc disk", 00:26:55.822 "block_size": 512, 00:26:55.823 "num_blocks": 65536, 00:26:55.823 "uuid": "0d1f50b3-4290-11ef-a0af-c98d8ee52a94", 00:26:55.823 "assigned_rate_limits": { 00:26:55.823 "rw_ios_per_sec": 0, 00:26:55.823 "rw_mbytes_per_sec": 0, 00:26:55.823 "r_mbytes_per_sec": 0, 00:26:55.823 "w_mbytes_per_sec": 0 00:26:55.823 }, 00:26:55.823 "claimed": true, 00:26:55.823 "claim_type": "exclusive_write", 00:26:55.823 "zoned": false, 00:26:55.823 "supported_io_types": { 00:26:55.823 "read": true, 00:26:55.823 "write": true, 00:26:55.823 "unmap": true, 00:26:55.823 "flush": true, 00:26:55.823 "reset": true, 00:26:55.823 "nvme_admin": false, 00:26:55.823 "nvme_io": false, 00:26:55.823 "nvme_io_md": false, 00:26:55.823 "write_zeroes": true, 00:26:55.823 "zcopy": true, 00:26:55.823 "get_zone_info": false, 00:26:55.823 "zone_management": false, 00:26:55.823 "zone_append": false, 00:26:55.823 "compare": false, 00:26:55.823 "compare_and_write": false, 00:26:55.823 "abort": true, 00:26:55.823 "seek_hole": false, 00:26:55.823 "seek_data": false, 00:26:55.823 "copy": true, 00:26:55.823 "nvme_iov_md": false 00:26:55.823 }, 00:26:55.823 "memory_domains": [ 00:26:55.823 { 00:26:55.823 "dma_device_id": "system", 00:26:55.823 "dma_device_type": 1 00:26:55.823 }, 00:26:55.823 { 00:26:55.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:55.823 "dma_device_type": 2 00:26:55.823 } 00:26:55.823 ], 00:26:55.823 "driver_specific": {} 00:26:55.823 }' 00:26:55.823 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:55.823 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:55.823 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:55.823 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:55.823 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:55.823 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:55.823 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:55.823 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:55.823 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:55.823 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:55.823 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:55.823 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:55.823 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:55.823 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:26:55.823 09:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:56.082 09:53:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:56.082 "name": "BaseBdev4", 00:26:56.082 "aliases": [ 00:26:56.082 "0d8fa09b-4290-11ef-a0af-c98d8ee52a94" 00:26:56.082 ], 00:26:56.082 "product_name": "Malloc disk", 00:26:56.082 "block_size": 512, 00:26:56.082 "num_blocks": 65536, 00:26:56.082 "uuid": "0d8fa09b-4290-11ef-a0af-c98d8ee52a94", 00:26:56.082 "assigned_rate_limits": { 00:26:56.082 "rw_ios_per_sec": 0, 00:26:56.082 "rw_mbytes_per_sec": 0, 00:26:56.082 "r_mbytes_per_sec": 0, 00:26:56.082 "w_mbytes_per_sec": 0 00:26:56.082 }, 00:26:56.083 "claimed": true, 00:26:56.083 "claim_type": "exclusive_write", 00:26:56.083 "zoned": false, 00:26:56.083 "supported_io_types": { 00:26:56.083 "read": true, 00:26:56.083 "write": true, 00:26:56.083 "unmap": true, 00:26:56.083 "flush": true, 00:26:56.083 "reset": true, 00:26:56.083 "nvme_admin": false, 00:26:56.083 "nvme_io": false, 00:26:56.083 "nvme_io_md": false, 00:26:56.083 "write_zeroes": true, 00:26:56.083 "zcopy": true, 00:26:56.083 "get_zone_info": false, 00:26:56.083 "zone_management": false, 00:26:56.083 "zone_append": false, 00:26:56.083 "compare": false, 00:26:56.083 "compare_and_write": false, 00:26:56.083 "abort": true, 00:26:56.083 "seek_hole": false, 00:26:56.083 "seek_data": false, 00:26:56.083 "copy": true, 00:26:56.083 "nvme_iov_md": false 00:26:56.083 }, 00:26:56.083 "memory_domains": [ 00:26:56.083 { 00:26:56.083 "dma_device_id": "system", 00:26:56.083 "dma_device_type": 1 00:26:56.083 }, 00:26:56.083 { 00:26:56.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:56.083 "dma_device_type": 2 00:26:56.083 } 00:26:56.083 ], 00:26:56.083 "driver_specific": {} 00:26:56.083 }' 00:26:56.083 09:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:56.083 09:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:56.083 09:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:56.083 09:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:56.083 09:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:56.083 09:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:56.083 09:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:56.083 09:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:56.083 09:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:56.083 09:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:56.083 09:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:56.342 09:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:56.342 09:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:56.342 [2024-07-15 09:53:24.378425] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:56.342 [2024-07-15 09:53:24.378455] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:56.342 [2024-07-15 09:53:24.378472] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:56.342 [2024-07-15 09:53:24.378578] bdev_raid.c: 
451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:56.342 [2024-07-15 09:53:24.378582] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x133049e34f00 name Existed_Raid, state offline 00:26:56.342 09:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 63570 00:26:56.342 09:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 63570 ']' 00:26:56.342 09:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 63570 00:26:56.342 09:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:26:56.342 09:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:26:56.342 09:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 63570 00:26:56.342 09:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:26:56.342 09:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:26:56.342 09:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:26:56.342 killing process with pid 63570 00:26:56.342 09:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63570' 00:26:56.342 09:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 63570 00:26:56.342 [2024-07-15 09:53:24.409120] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:56.342 09:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 63570 00:26:56.342 [2024-07-15 09:53:24.444014] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:56.601 09:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:26:56.601 00:26:56.601 real 0m23.889s 00:26:56.601 user 0m42.455s 00:26:56.601 sys 0m4.512s 00:26:56.601 09:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:56.601 09:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:56.601 ************************************ 00:26:56.601 END TEST raid_state_function_test_sb 00:26:56.601 ************************************ 00:26:56.860 09:53:24 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:26:56.860 09:53:24 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:26:56.860 09:53:24 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:26:56.860 09:53:24 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:56.860 09:53:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:56.860 ************************************ 00:26:56.860 START TEST raid_superblock_test 00:26:56.860 ************************************ 00:26:56.860 09:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 4 00:26:56.860 09:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:26:56.860 09:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:26:56.860 09:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:26:56.860 09:53:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:26:56.860 09:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:26:56.860 09:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:26:56.860 09:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:26:56.860 09:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:26:56.860 09:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:26:56.860 09:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:26:56.860 09:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:26:56.860 09:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:26:56.860 09:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:26:56.860 09:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:26:56.860 09:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:26:56.860 09:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=64376 00:26:56.860 09:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 64376 /var/tmp/spdk-raid.sock 00:26:56.860 09:53:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:26:56.860 09:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 64376 ']' 00:26:56.860 09:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:56.861 09:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:56.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:56.861 09:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:56.861 09:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:56.861 09:53:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:56.861 [2024-07-15 09:53:24.776307] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:26:56.861 [2024-07-15 09:53:24.776629] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:26:57.427 EAL: TSC is not safe to use in SMP mode 00:26:57.427 EAL: TSC is not invariant 00:26:57.427 [2024-07-15 09:53:25.493114] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.718 [2024-07-15 09:53:25.599145] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:26:57.718 [2024-07-15 09:53:25.601667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:57.718 [2024-07-15 09:53:25.602392] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:57.718 [2024-07-15 09:53:25.602404] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:57.718 09:53:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:57.718 09:53:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:26:57.718 09:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:26:57.718 09:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:26:57.718 09:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:26:57.718 09:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:26:57.718 09:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:26:57.718 09:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:57.718 09:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:26:57.718 09:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:57.718 09:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:26:57.976 malloc1 00:26:57.976 09:53:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:58.237 [2024-07-15 09:53:26.165379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:58.237 [2024-07-15 09:53:26.165450] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:58.237 [2024-07-15 09:53:26.165459] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb0239834780 00:26:58.237 [2024-07-15 09:53:26.165467] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:58.237 [2024-07-15 09:53:26.166443] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:58.237 [2024-07-15 09:53:26.166471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:58.237 pt1 00:26:58.237 09:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:26:58.237 09:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:26:58.237 09:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:26:58.237 09:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:26:58.237 09:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:26:58.237 09:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:58.237 09:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:26:58.237 09:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:58.237 09:53:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:26:58.497 malloc2 00:26:58.497 09:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:58.756 [2024-07-15 09:53:26.617405] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:58.756 [2024-07-15 09:53:26.617477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:58.756 [2024-07-15 09:53:26.617487] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb0239834c80 00:26:58.756 [2024-07-15 09:53:26.617494] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:58.756 [2024-07-15 09:53:26.618242] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:58.756 [2024-07-15 09:53:26.618269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:58.756 pt2 00:26:58.756 09:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:26:58.756 09:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:26:58.756 09:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:26:58.756 09:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:26:58.756 09:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:26:58.756 09:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:58.756 09:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:26:58.756 09:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:58.756 09:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:26:58.756 malloc3 00:26:58.756 09:53:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:59.017 [2024-07-15 09:53:27.021417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:59.018 [2024-07-15 09:53:27.021479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:59.018 [2024-07-15 09:53:27.021489] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb0239835180 00:26:59.018 [2024-07-15 09:53:27.021496] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:59.018 [2024-07-15 09:53:27.022227] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:59.018 [2024-07-15 09:53:27.022259] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:59.018 pt3 00:26:59.018 09:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:26:59.018 09:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:26:59.018 09:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 
00:26:59.018 09:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:26:59.018 09:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:26:59.018 09:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:59.018 09:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:26:59.018 09:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:59.018 09:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:26:59.282 malloc4 00:26:59.282 09:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:59.545 [2024-07-15 09:53:27.397437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:59.545 [2024-07-15 09:53:27.397503] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:59.545 [2024-07-15 09:53:27.397512] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb0239835680 00:26:59.545 [2024-07-15 09:53:27.397519] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:59.545 [2024-07-15 09:53:27.398194] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:59.545 [2024-07-15 09:53:27.398223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:59.545 pt4 00:26:59.545 09:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:26:59.545 09:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:26:59.545 09:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:26:59.545 [2024-07-15 09:53:27.621449] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:59.545 [2024-07-15 09:53:27.622077] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:59.545 [2024-07-15 09:53:27.622096] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:59.545 [2024-07-15 09:53:27.622107] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:59.545 [2024-07-15 09:53:27.622159] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xb0239835900 00:26:59.545 [2024-07-15 09:53:27.622165] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:59.545 [2024-07-15 09:53:27.622198] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb0239897e20 00:26:59.545 [2024-07-15 09:53:27.622272] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xb0239835900 00:26:59.545 [2024-07-15 09:53:27.622275] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xb0239835900 00:26:59.545 [2024-07-15 09:53:27.622295] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:59.545 09:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:26:59.545 09:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:59.545 09:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:59.545 09:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:59.545 09:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:59.545 09:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:59.545 09:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:59.545 09:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:59.545 09:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:59.545 09:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:59.545 09:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:59.545 09:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:59.809 09:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:59.809 "name": "raid_bdev1", 00:26:59.809 "uuid": "158f5884-4290-11ef-a0af-c98d8ee52a94", 00:26:59.809 "strip_size_kb": 0, 00:26:59.809 "state": "online", 00:26:59.809 "raid_level": "raid1", 00:26:59.809 "superblock": true, 00:26:59.809 "num_base_bdevs": 4, 00:26:59.809 "num_base_bdevs_discovered": 4, 00:26:59.809 "num_base_bdevs_operational": 4, 00:26:59.809 "base_bdevs_list": [ 00:26:59.809 { 00:26:59.809 "name": "pt1", 00:26:59.809 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:59.809 "is_configured": true, 00:26:59.809 "data_offset": 2048, 00:26:59.809 "data_size": 63488 00:26:59.809 }, 00:26:59.809 { 00:26:59.809 "name": "pt2", 00:26:59.809 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:59.809 "is_configured": true, 00:26:59.809 "data_offset": 2048, 00:26:59.809 "data_size": 63488 00:26:59.809 }, 00:26:59.809 { 00:26:59.809 "name": "pt3", 00:26:59.809 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:59.809 "is_configured": true, 00:26:59.809 "data_offset": 2048, 00:26:59.809 "data_size": 63488 00:26:59.809 }, 00:26:59.809 { 00:26:59.809 "name": "pt4", 00:26:59.809 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:59.809 "is_configured": true, 00:26:59.809 "data_offset": 2048, 00:26:59.809 "data_size": 63488 00:26:59.809 } 00:26:59.809 ] 00:26:59.809 }' 00:26:59.809 09:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:59.809 09:53:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:00.076 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:27:00.076 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:27:00.076 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:00.076 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:00.076 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:00.076 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:27:00.076 
09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:00.076 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:00.346 [2024-07-15 09:53:28.321492] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:00.346 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:00.346 "name": "raid_bdev1", 00:27:00.346 "aliases": [ 00:27:00.346 "158f5884-4290-11ef-a0af-c98d8ee52a94" 00:27:00.346 ], 00:27:00.346 "product_name": "Raid Volume", 00:27:00.346 "block_size": 512, 00:27:00.346 "num_blocks": 63488, 00:27:00.346 "uuid": "158f5884-4290-11ef-a0af-c98d8ee52a94", 00:27:00.346 "assigned_rate_limits": { 00:27:00.346 "rw_ios_per_sec": 0, 00:27:00.346 "rw_mbytes_per_sec": 0, 00:27:00.346 "r_mbytes_per_sec": 0, 00:27:00.346 "w_mbytes_per_sec": 0 00:27:00.346 }, 00:27:00.346 "claimed": false, 00:27:00.346 "zoned": false, 00:27:00.346 "supported_io_types": { 00:27:00.346 "read": true, 00:27:00.346 "write": true, 00:27:00.346 "unmap": false, 00:27:00.346 "flush": false, 00:27:00.346 "reset": true, 00:27:00.346 "nvme_admin": false, 00:27:00.346 "nvme_io": false, 00:27:00.346 "nvme_io_md": false, 00:27:00.346 "write_zeroes": true, 00:27:00.346 "zcopy": false, 00:27:00.346 "get_zone_info": false, 00:27:00.346 "zone_management": false, 00:27:00.346 "zone_append": false, 00:27:00.346 "compare": false, 00:27:00.346 "compare_and_write": false, 00:27:00.346 "abort": false, 00:27:00.346 "seek_hole": false, 00:27:00.346 "seek_data": false, 00:27:00.346 "copy": false, 00:27:00.346 "nvme_iov_md": false 00:27:00.346 }, 00:27:00.346 "memory_domains": [ 00:27:00.346 { 00:27:00.346 "dma_device_id": "system", 00:27:00.346 "dma_device_type": 1 00:27:00.346 }, 00:27:00.346 { 00:27:00.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:00.346 "dma_device_type": 2 00:27:00.346 }, 00:27:00.346 { 00:27:00.346 "dma_device_id": "system", 00:27:00.346 "dma_device_type": 1 00:27:00.346 }, 00:27:00.346 { 00:27:00.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:00.346 "dma_device_type": 2 00:27:00.346 }, 00:27:00.346 { 00:27:00.346 "dma_device_id": "system", 00:27:00.346 "dma_device_type": 1 00:27:00.346 }, 00:27:00.346 { 00:27:00.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:00.347 "dma_device_type": 2 00:27:00.347 }, 00:27:00.347 { 00:27:00.347 "dma_device_id": "system", 00:27:00.347 "dma_device_type": 1 00:27:00.347 }, 00:27:00.347 { 00:27:00.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:00.347 "dma_device_type": 2 00:27:00.347 } 00:27:00.347 ], 00:27:00.347 "driver_specific": { 00:27:00.347 "raid": { 00:27:00.347 "uuid": "158f5884-4290-11ef-a0af-c98d8ee52a94", 00:27:00.347 "strip_size_kb": 0, 00:27:00.347 "state": "online", 00:27:00.347 "raid_level": "raid1", 00:27:00.347 "superblock": true, 00:27:00.347 "num_base_bdevs": 4, 00:27:00.347 "num_base_bdevs_discovered": 4, 00:27:00.347 "num_base_bdevs_operational": 4, 00:27:00.347 "base_bdevs_list": [ 00:27:00.347 { 00:27:00.347 "name": "pt1", 00:27:00.347 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:00.347 "is_configured": true, 00:27:00.347 "data_offset": 2048, 00:27:00.347 "data_size": 63488 00:27:00.347 }, 00:27:00.347 { 00:27:00.347 "name": "pt2", 00:27:00.347 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:00.347 "is_configured": true, 00:27:00.347 "data_offset": 2048, 00:27:00.347 "data_size": 63488 00:27:00.347 }, 00:27:00.347 
{ 00:27:00.347 "name": "pt3", 00:27:00.347 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:00.347 "is_configured": true, 00:27:00.347 "data_offset": 2048, 00:27:00.347 "data_size": 63488 00:27:00.347 }, 00:27:00.347 { 00:27:00.347 "name": "pt4", 00:27:00.347 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:00.347 "is_configured": true, 00:27:00.347 "data_offset": 2048, 00:27:00.347 "data_size": 63488 00:27:00.347 } 00:27:00.347 ] 00:27:00.347 } 00:27:00.347 } 00:27:00.347 }' 00:27:00.347 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:00.347 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:27:00.347 pt2 00:27:00.347 pt3 00:27:00.347 pt4' 00:27:00.347 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:00.347 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:27:00.347 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:00.619 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:00.619 "name": "pt1", 00:27:00.619 "aliases": [ 00:27:00.619 "00000000-0000-0000-0000-000000000001" 00:27:00.619 ], 00:27:00.619 "product_name": "passthru", 00:27:00.619 "block_size": 512, 00:27:00.619 "num_blocks": 65536, 00:27:00.619 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:00.619 "assigned_rate_limits": { 00:27:00.619 "rw_ios_per_sec": 0, 00:27:00.619 "rw_mbytes_per_sec": 0, 00:27:00.619 "r_mbytes_per_sec": 0, 00:27:00.619 "w_mbytes_per_sec": 0 00:27:00.619 }, 00:27:00.619 "claimed": true, 00:27:00.619 "claim_type": "exclusive_write", 00:27:00.619 "zoned": false, 00:27:00.619 "supported_io_types": { 00:27:00.619 "read": true, 00:27:00.619 "write": true, 00:27:00.619 "unmap": true, 00:27:00.619 "flush": true, 00:27:00.619 "reset": true, 00:27:00.619 "nvme_admin": false, 00:27:00.619 "nvme_io": false, 00:27:00.619 "nvme_io_md": false, 00:27:00.619 "write_zeroes": true, 00:27:00.619 "zcopy": true, 00:27:00.619 "get_zone_info": false, 00:27:00.619 "zone_management": false, 00:27:00.619 "zone_append": false, 00:27:00.619 "compare": false, 00:27:00.619 "compare_and_write": false, 00:27:00.619 "abort": true, 00:27:00.619 "seek_hole": false, 00:27:00.619 "seek_data": false, 00:27:00.619 "copy": true, 00:27:00.619 "nvme_iov_md": false 00:27:00.619 }, 00:27:00.619 "memory_domains": [ 00:27:00.619 { 00:27:00.619 "dma_device_id": "system", 00:27:00.619 "dma_device_type": 1 00:27:00.619 }, 00:27:00.619 { 00:27:00.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:00.619 "dma_device_type": 2 00:27:00.619 } 00:27:00.619 ], 00:27:00.619 "driver_specific": { 00:27:00.619 "passthru": { 00:27:00.619 "name": "pt1", 00:27:00.619 "base_bdev_name": "malloc1" 00:27:00.619 } 00:27:00.619 } 00:27:00.619 }' 00:27:00.619 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:00.619 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:00.619 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:00.619 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:00.619 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:00.619 09:53:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:00.619 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:00.619 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:00.619 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:00.619 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:00.619 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:00.619 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:00.619 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:00.619 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:27:00.619 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:00.888 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:00.888 "name": "pt2", 00:27:00.888 "aliases": [ 00:27:00.888 "00000000-0000-0000-0000-000000000002" 00:27:00.888 ], 00:27:00.888 "product_name": "passthru", 00:27:00.888 "block_size": 512, 00:27:00.888 "num_blocks": 65536, 00:27:00.888 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:00.888 "assigned_rate_limits": { 00:27:00.888 "rw_ios_per_sec": 0, 00:27:00.888 "rw_mbytes_per_sec": 0, 00:27:00.888 "r_mbytes_per_sec": 0, 00:27:00.888 "w_mbytes_per_sec": 0 00:27:00.888 }, 00:27:00.888 "claimed": true, 00:27:00.888 "claim_type": "exclusive_write", 00:27:00.888 "zoned": false, 00:27:00.888 "supported_io_types": { 00:27:00.888 "read": true, 00:27:00.888 "write": true, 00:27:00.888 "unmap": true, 00:27:00.888 "flush": true, 00:27:00.888 "reset": true, 00:27:00.888 "nvme_admin": false, 00:27:00.888 "nvme_io": false, 00:27:00.888 "nvme_io_md": false, 00:27:00.888 "write_zeroes": true, 00:27:00.888 "zcopy": true, 00:27:00.888 "get_zone_info": false, 00:27:00.888 "zone_management": false, 00:27:00.888 "zone_append": false, 00:27:00.888 "compare": false, 00:27:00.888 "compare_and_write": false, 00:27:00.888 "abort": true, 00:27:00.888 "seek_hole": false, 00:27:00.888 "seek_data": false, 00:27:00.888 "copy": true, 00:27:00.888 "nvme_iov_md": false 00:27:00.888 }, 00:27:00.888 "memory_domains": [ 00:27:00.888 { 00:27:00.888 "dma_device_id": "system", 00:27:00.888 "dma_device_type": 1 00:27:00.888 }, 00:27:00.888 { 00:27:00.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:00.888 "dma_device_type": 2 00:27:00.888 } 00:27:00.888 ], 00:27:00.888 "driver_specific": { 00:27:00.888 "passthru": { 00:27:00.888 "name": "pt2", 00:27:00.888 "base_bdev_name": "malloc2" 00:27:00.888 } 00:27:00.888 } 00:27:00.888 }' 00:27:00.888 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:00.888 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:00.888 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:00.888 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:00.888 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:00.888 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:00.888 09:53:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:00.888 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:00.888 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:00.888 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:00.888 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:00.888 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:00.888 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:00.888 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:27:00.888 09:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:01.150 09:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:01.150 "name": "pt3", 00:27:01.150 "aliases": [ 00:27:01.150 "00000000-0000-0000-0000-000000000003" 00:27:01.150 ], 00:27:01.150 "product_name": "passthru", 00:27:01.150 "block_size": 512, 00:27:01.150 "num_blocks": 65536, 00:27:01.150 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:01.150 "assigned_rate_limits": { 00:27:01.150 "rw_ios_per_sec": 0, 00:27:01.150 "rw_mbytes_per_sec": 0, 00:27:01.150 "r_mbytes_per_sec": 0, 00:27:01.150 "w_mbytes_per_sec": 0 00:27:01.150 }, 00:27:01.150 "claimed": true, 00:27:01.150 "claim_type": "exclusive_write", 00:27:01.150 "zoned": false, 00:27:01.150 "supported_io_types": { 00:27:01.150 "read": true, 00:27:01.150 "write": true, 00:27:01.150 "unmap": true, 00:27:01.150 "flush": true, 00:27:01.150 "reset": true, 00:27:01.150 "nvme_admin": false, 00:27:01.150 "nvme_io": false, 00:27:01.150 "nvme_io_md": false, 00:27:01.150 "write_zeroes": true, 00:27:01.150 "zcopy": true, 00:27:01.150 "get_zone_info": false, 00:27:01.150 "zone_management": false, 00:27:01.150 "zone_append": false, 00:27:01.150 "compare": false, 00:27:01.150 "compare_and_write": false, 00:27:01.150 "abort": true, 00:27:01.150 "seek_hole": false, 00:27:01.150 "seek_data": false, 00:27:01.150 "copy": true, 00:27:01.150 "nvme_iov_md": false 00:27:01.150 }, 00:27:01.150 "memory_domains": [ 00:27:01.150 { 00:27:01.150 "dma_device_id": "system", 00:27:01.150 "dma_device_type": 1 00:27:01.150 }, 00:27:01.150 { 00:27:01.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:01.150 "dma_device_type": 2 00:27:01.150 } 00:27:01.150 ], 00:27:01.150 "driver_specific": { 00:27:01.150 "passthru": { 00:27:01.150 "name": "pt3", 00:27:01.150 "base_bdev_name": "malloc3" 00:27:01.150 } 00:27:01.150 } 00:27:01.150 }' 00:27:01.150 09:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:01.150 09:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:01.150 09:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:01.150 09:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:01.150 09:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:01.150 09:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:01.150 09:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:01.150 09:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:27:01.150 09:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:01.150 09:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:01.410 09:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:01.410 09:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:01.410 09:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:01.410 09:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:27:01.410 09:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:01.410 09:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:01.410 "name": "pt4", 00:27:01.410 "aliases": [ 00:27:01.410 "00000000-0000-0000-0000-000000000004" 00:27:01.410 ], 00:27:01.410 "product_name": "passthru", 00:27:01.410 "block_size": 512, 00:27:01.410 "num_blocks": 65536, 00:27:01.410 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:01.410 "assigned_rate_limits": { 00:27:01.410 "rw_ios_per_sec": 0, 00:27:01.410 "rw_mbytes_per_sec": 0, 00:27:01.410 "r_mbytes_per_sec": 0, 00:27:01.410 "w_mbytes_per_sec": 0 00:27:01.410 }, 00:27:01.410 "claimed": true, 00:27:01.410 "claim_type": "exclusive_write", 00:27:01.410 "zoned": false, 00:27:01.410 "supported_io_types": { 00:27:01.410 "read": true, 00:27:01.410 "write": true, 00:27:01.410 "unmap": true, 00:27:01.410 "flush": true, 00:27:01.410 "reset": true, 00:27:01.410 "nvme_admin": false, 00:27:01.410 "nvme_io": false, 00:27:01.410 "nvme_io_md": false, 00:27:01.410 "write_zeroes": true, 00:27:01.410 "zcopy": true, 00:27:01.410 "get_zone_info": false, 00:27:01.410 "zone_management": false, 00:27:01.410 "zone_append": false, 00:27:01.410 "compare": false, 00:27:01.410 "compare_and_write": false, 00:27:01.410 "abort": true, 00:27:01.410 "seek_hole": false, 00:27:01.410 "seek_data": false, 00:27:01.410 "copy": true, 00:27:01.410 "nvme_iov_md": false 00:27:01.410 }, 00:27:01.410 "memory_domains": [ 00:27:01.410 { 00:27:01.410 "dma_device_id": "system", 00:27:01.410 "dma_device_type": 1 00:27:01.410 }, 00:27:01.410 { 00:27:01.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:01.410 "dma_device_type": 2 00:27:01.410 } 00:27:01.410 ], 00:27:01.410 "driver_specific": { 00:27:01.410 "passthru": { 00:27:01.411 "name": "pt4", 00:27:01.411 "base_bdev_name": "malloc4" 00:27:01.411 } 00:27:01.411 } 00:27:01.411 }' 00:27:01.411 09:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:01.411 09:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:01.411 09:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:01.411 09:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:01.411 09:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:01.411 09:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:01.411 09:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:01.411 09:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:01.670 09:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:01.670 09:53:29 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:27:01.670 09:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:27:01.670 09:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:27:01.670 09:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:27:01.670 09:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid'
00:27:01.670 [2024-07-15 09:53:29.721602] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:27:01.670 09:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=158f5884-4290-11ef-a0af-c98d8ee52a94
00:27:01.670 09:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 158f5884-4290-11ef-a0af-c98d8ee52a94 ']'
00:27:01.670 09:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:27:01.929 [2024-07-15 09:53:29.917561] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:27:01.929 [2024-07-15 09:53:29.917575] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:27:01.929 [2024-07-15 09:53:29.917591] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:27:01.929 [2024-07-15 09:53:29.917612] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:27:01.929 [2024-07-15 09:53:29.917616] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb0239835900 name raid_bdev1, state offline
00:27:01.929 09:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:27:01.929 09:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]'
00:27:02.188 09:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev=
00:27:02.188 09:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']'
00:27:02.188 09:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}"
00:27:02.188 09:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:27:02.446 09:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}"
00:27:02.446 09:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:27:02.446 09:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}"
00:27:02.446 09:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:27:02.706 09:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}"
00:27:02.706 09:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4
00:27:02.991 09:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
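With all four base bdevs verified, the trace above tears the array down: the raid_bdev1 UUID is captured (@434-435), the raid bdev is deleted and moves from online to offline (@440), the four passthru bdevs are removed (@447-448), and the jq filter just echoed pairs with the bdev_get_bdevs call that follows to confirm that no passthru bdev survived (it must print false). Condensed into a sketch reconstructed from the xtrace; the rpc shorthand and the exit on an empty UUID are assumptions:

    # Teardown as traced (bdev_raid.sh@434-450); a sketch, not the verbatim script.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    raid_bdev_uuid=$($rpc bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')  # @434
    [ -z "$raid_bdev_uuid" ] && exit 1                                         # @435 tests -z on the UUID
    $rpc bdev_raid_delete raid_bdev1                                           # @440
    for i in "${base_bdevs_pt[@]}"; do                                         # @447: pt1..pt4
        $rpc bdev_passthru_delete "$i"                                         # @448
    done
    $rpc bdev_get_bdevs | jq -r '[.[] | select(.product_name == "passthru")] | any'  # @450, expect false

The negative test that follows (@456) then re-issues bdev_raid_create over the same malloc bdevs, whose superblocks still name the deleted array, and expects the JSON-RPC error -17 (File exists) shown below.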
00:27:02.991 09:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:27:03.249 09:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']'
00:27:03.249 09:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:27:03.249 09:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0
00:27:03.249 09:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:27:03.249 09:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:27:03.249 09:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:27:03.249 09:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:27:03.249 09:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:27:03.249 09:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:27:03.249 09:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:27:03.249 09:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:27:03.249 09:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:27:03.249 09:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:27:03.249 [2024-07-15 09:53:31.341672] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:27:03.249 [2024-07-15 09:53:31.342426] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:27:03.250 [2024-07-15 09:53:31.342449] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:27:03.250 [2024-07-15 09:53:31.342458] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:27:03.250 [2024-07-15 09:53:31.342473] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:27:03.250 [2024-07-15 09:53:31.342514] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:27:03.250 [2024-07-15 09:53:31.342523] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:27:03.250 [2024-07-15 09:53:31.342530] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:27:03.250 [2024-07-15 09:53:31.342538] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:27:03.250 [2024-07-15 09:53:31.342542] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb0239835680 name raid_bdev1, state configuring
00:27:03.250 request:
00:27:03.250 {
00:27:03.250 "name": "raid_bdev1",
00:27:03.250 "raid_level": "raid1",
00:27:03.250 "base_bdevs": [
00:27:03.250 "malloc1",
00:27:03.250 "malloc2",
00:27:03.250 "malloc3",
00:27:03.250 "malloc4"
00:27:03.250 ],
00:27:03.250 "superblock": false,
00:27:03.250 "method": "bdev_raid_create",
00:27:03.250 "req_id": 1
00:27:03.250 }
00:27:03.250 Got JSON-RPC error response
00:27:03.250 response:
00:27:03.250 {
00:27:03.250 "code": -17,
00:27:03.250 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:27:03.250 }
00:27:03.508 09:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1
00:27:03.508 09:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:27:03.508 09:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:27:03.508 09:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:27:03.508 09:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:27:03.508 09:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]'
00:27:03.508 09:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev=
00:27:03.508 09:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']'
00:27:03.508 09:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:27:03.768 [2024-07-15 09:53:31.777689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:27:03.768 [2024-07-15 09:53:31.777748] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:27:03.768 [2024-07-15 09:53:31.777757] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb0239835180
00:27:03.768 [2024-07-15 09:53:31.777765] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:27:03.768 [2024-07-15 09:53:31.778494] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:27:03.768 [2024-07-15 09:53:31.778525] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:27:03.768 [2024-07-15 09:53:31.778545] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:27:03.768 [2024-07-15 09:53:31.778555] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:27:03.768 pt1
00:27:03.768 09:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
00:27:03.768 09:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:27:03.768 09:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:27:03.768 09:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:27:03.768 09:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:27:03.768 09:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:27:03.768 09:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:27:03.768 09:53:31
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:03.768 09:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:03.768 09:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:03.768 09:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:03.768 09:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:04.026 09:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:04.026 "name": "raid_bdev1", 00:27:04.026 "uuid": "158f5884-4290-11ef-a0af-c98d8ee52a94", 00:27:04.026 "strip_size_kb": 0, 00:27:04.026 "state": "configuring", 00:27:04.026 "raid_level": "raid1", 00:27:04.026 "superblock": true, 00:27:04.026 "num_base_bdevs": 4, 00:27:04.026 "num_base_bdevs_discovered": 1, 00:27:04.026 "num_base_bdevs_operational": 4, 00:27:04.026 "base_bdevs_list": [ 00:27:04.026 { 00:27:04.026 "name": "pt1", 00:27:04.026 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:04.026 "is_configured": true, 00:27:04.026 "data_offset": 2048, 00:27:04.026 "data_size": 63488 00:27:04.026 }, 00:27:04.026 { 00:27:04.026 "name": null, 00:27:04.026 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:04.026 "is_configured": false, 00:27:04.026 "data_offset": 2048, 00:27:04.026 "data_size": 63488 00:27:04.026 }, 00:27:04.026 { 00:27:04.026 "name": null, 00:27:04.026 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:04.026 "is_configured": false, 00:27:04.026 "data_offset": 2048, 00:27:04.026 "data_size": 63488 00:27:04.026 }, 00:27:04.026 { 00:27:04.026 "name": null, 00:27:04.026 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:04.026 "is_configured": false, 00:27:04.026 "data_offset": 2048, 00:27:04.026 "data_size": 63488 00:27:04.026 } 00:27:04.026 ] 00:27:04.026 }' 00:27:04.026 09:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:04.026 09:53:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:04.286 09:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:27:04.286 09:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:04.544 [2024-07-15 09:53:32.537734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:04.544 [2024-07-15 09:53:32.537791] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:04.544 [2024-07-15 09:53:32.537800] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb0239834780 00:27:04.544 [2024-07-15 09:53:32.537806] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:04.544 [2024-07-15 09:53:32.537906] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:04.544 [2024-07-15 09:53:32.537913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:04.544 [2024-07-15 09:53:32.537928] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:04.545 [2024-07-15 09:53:32.537935] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:04.545 pt2 00:27:04.545 09:53:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:04.828 [2024-07-15 09:53:32.749756] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:27:04.828 09:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:27:04.828 09:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:04.828 09:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:04.828 09:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:04.828 09:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:04.828 09:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:04.828 09:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:04.828 09:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:04.828 09:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:04.828 09:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:04.828 09:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:04.828 09:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:05.088 09:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:05.088 "name": "raid_bdev1", 00:27:05.088 "uuid": "158f5884-4290-11ef-a0af-c98d8ee52a94", 00:27:05.088 "strip_size_kb": 0, 00:27:05.088 "state": "configuring", 00:27:05.088 "raid_level": "raid1", 00:27:05.088 "superblock": true, 00:27:05.088 "num_base_bdevs": 4, 00:27:05.088 "num_base_bdevs_discovered": 1, 00:27:05.088 "num_base_bdevs_operational": 4, 00:27:05.088 "base_bdevs_list": [ 00:27:05.088 { 00:27:05.088 "name": "pt1", 00:27:05.088 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:05.088 "is_configured": true, 00:27:05.088 "data_offset": 2048, 00:27:05.088 "data_size": 63488 00:27:05.088 }, 00:27:05.088 { 00:27:05.088 "name": null, 00:27:05.088 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:05.088 "is_configured": false, 00:27:05.088 "data_offset": 2048, 00:27:05.088 "data_size": 63488 00:27:05.088 }, 00:27:05.088 { 00:27:05.088 "name": null, 00:27:05.088 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:05.088 "is_configured": false, 00:27:05.088 "data_offset": 2048, 00:27:05.088 "data_size": 63488 00:27:05.088 }, 00:27:05.088 { 00:27:05.088 "name": null, 00:27:05.088 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:05.088 "is_configured": false, 00:27:05.088 "data_offset": 2048, 00:27:05.088 "data_size": 63488 00:27:05.088 } 00:27:05.088 ] 00:27:05.088 }' 00:27:05.088 09:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:05.088 09:53:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:05.346 09:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:27:05.346 09:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:27:05.346 09:53:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:05.604 [2024-07-15 09:53:33.493776] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:05.604 [2024-07-15 09:53:33.493843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:05.604 [2024-07-15 09:53:33.493852] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb0239834780 00:27:05.604 [2024-07-15 09:53:33.493859] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:05.604 [2024-07-15 09:53:33.493970] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:05.604 [2024-07-15 09:53:33.493978] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:05.604 [2024-07-15 09:53:33.493997] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:05.604 [2024-07-15 09:53:33.494004] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:05.604 pt2 00:27:05.604 09:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:27:05.604 09:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:27:05.604 09:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:05.861 [2024-07-15 09:53:33.717793] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:05.861 [2024-07-15 09:53:33.717847] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:05.861 [2024-07-15 09:53:33.717855] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb0239835b80 00:27:05.861 [2024-07-15 09:53:33.717862] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:05.861 [2024-07-15 09:53:33.717940] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:05.861 [2024-07-15 09:53:33.717947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:05.861 [2024-07-15 09:53:33.717961] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:27:05.861 [2024-07-15 09:53:33.717967] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:05.861 pt3 00:27:05.861 09:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:27:05.861 09:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:27:05.861 09:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:27:05.861 [2024-07-15 09:53:33.913809] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:27:05.861 [2024-07-15 09:53:33.913855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:05.861 [2024-07-15 09:53:33.913863] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb0239835900 00:27:05.861 [2024-07-15 09:53:33.913869] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:05.861 [2024-07-15 09:53:33.913942] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:05.861 [2024-07-15 09:53:33.913950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:27:05.861 [2024-07-15 09:53:33.913965] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:27:05.861 [2024-07-15 09:53:33.913971] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:27:05.861 [2024-07-15 09:53:33.913996] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xb0239834c80 00:27:05.861 [2024-07-15 09:53:33.914000] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:05.861 [2024-07-15 09:53:33.914018] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb0239897e20 00:27:05.861 [2024-07-15 09:53:33.914067] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xb0239834c80 00:27:05.861 [2024-07-15 09:53:33.914070] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xb0239834c80 00:27:05.861 [2024-07-15 09:53:33.914091] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:05.861 pt4 00:27:05.861 09:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:27:05.861 09:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:27:05.861 09:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:27:05.861 09:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:05.861 09:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:05.861 09:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:05.861 09:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:05.861 09:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:05.861 09:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:05.861 09:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:05.861 09:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:05.862 09:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:05.862 09:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:05.862 09:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:06.118 09:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:06.118 "name": "raid_bdev1", 00:27:06.118 "uuid": "158f5884-4290-11ef-a0af-c98d8ee52a94", 00:27:06.118 "strip_size_kb": 0, 00:27:06.118 "state": "online", 00:27:06.118 "raid_level": "raid1", 00:27:06.118 "superblock": true, 00:27:06.118 "num_base_bdevs": 4, 00:27:06.118 "num_base_bdevs_discovered": 4, 00:27:06.118 "num_base_bdevs_operational": 4, 00:27:06.118 "base_bdevs_list": [ 00:27:06.118 { 00:27:06.118 "name": "pt1", 00:27:06.118 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:06.118 "is_configured": true, 00:27:06.118 "data_offset": 2048, 00:27:06.118 "data_size": 63488 00:27:06.118 }, 
00:27:06.118 { 00:27:06.118 "name": "pt2", 00:27:06.118 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:06.118 "is_configured": true, 00:27:06.118 "data_offset": 2048, 00:27:06.118 "data_size": 63488 00:27:06.118 }, 00:27:06.118 { 00:27:06.118 "name": "pt3", 00:27:06.119 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:06.119 "is_configured": true, 00:27:06.119 "data_offset": 2048, 00:27:06.119 "data_size": 63488 00:27:06.119 }, 00:27:06.119 { 00:27:06.119 "name": "pt4", 00:27:06.119 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:06.119 "is_configured": true, 00:27:06.119 "data_offset": 2048, 00:27:06.119 "data_size": 63488 00:27:06.119 } 00:27:06.119 ] 00:27:06.119 }' 00:27:06.119 09:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:06.119 09:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:06.401 09:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:27:06.401 09:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:27:06.401 09:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:06.401 09:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:06.401 09:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:06.401 09:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:27:06.401 09:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:06.401 09:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:06.660 [2024-07-15 09:53:34.625901] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:06.660 09:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:06.660 "name": "raid_bdev1", 00:27:06.660 "aliases": [ 00:27:06.660 "158f5884-4290-11ef-a0af-c98d8ee52a94" 00:27:06.660 ], 00:27:06.660 "product_name": "Raid Volume", 00:27:06.660 "block_size": 512, 00:27:06.660 "num_blocks": 63488, 00:27:06.660 "uuid": "158f5884-4290-11ef-a0af-c98d8ee52a94", 00:27:06.660 "assigned_rate_limits": { 00:27:06.660 "rw_ios_per_sec": 0, 00:27:06.660 "rw_mbytes_per_sec": 0, 00:27:06.660 "r_mbytes_per_sec": 0, 00:27:06.660 "w_mbytes_per_sec": 0 00:27:06.660 }, 00:27:06.660 "claimed": false, 00:27:06.660 "zoned": false, 00:27:06.660 "supported_io_types": { 00:27:06.660 "read": true, 00:27:06.660 "write": true, 00:27:06.660 "unmap": false, 00:27:06.660 "flush": false, 00:27:06.660 "reset": true, 00:27:06.660 "nvme_admin": false, 00:27:06.660 "nvme_io": false, 00:27:06.660 "nvme_io_md": false, 00:27:06.660 "write_zeroes": true, 00:27:06.660 "zcopy": false, 00:27:06.660 "get_zone_info": false, 00:27:06.660 "zone_management": false, 00:27:06.660 "zone_append": false, 00:27:06.660 "compare": false, 00:27:06.660 "compare_and_write": false, 00:27:06.660 "abort": false, 00:27:06.660 "seek_hole": false, 00:27:06.660 "seek_data": false, 00:27:06.660 "copy": false, 00:27:06.660 "nvme_iov_md": false 00:27:06.660 }, 00:27:06.660 "memory_domains": [ 00:27:06.660 { 00:27:06.660 "dma_device_id": "system", 00:27:06.660 "dma_device_type": 1 00:27:06.660 }, 00:27:06.660 { 00:27:06.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:06.660 "dma_device_type": 2 00:27:06.660 }, 
00:27:06.660 { 00:27:06.660 "dma_device_id": "system", 00:27:06.660 "dma_device_type": 1 00:27:06.660 }, 00:27:06.660 { 00:27:06.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:06.660 "dma_device_type": 2 00:27:06.660 }, 00:27:06.660 { 00:27:06.660 "dma_device_id": "system", 00:27:06.660 "dma_device_type": 1 00:27:06.660 }, 00:27:06.660 { 00:27:06.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:06.660 "dma_device_type": 2 00:27:06.660 }, 00:27:06.660 { 00:27:06.660 "dma_device_id": "system", 00:27:06.660 "dma_device_type": 1 00:27:06.660 }, 00:27:06.660 { 00:27:06.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:06.660 "dma_device_type": 2 00:27:06.660 } 00:27:06.660 ], 00:27:06.660 "driver_specific": { 00:27:06.660 "raid": { 00:27:06.660 "uuid": "158f5884-4290-11ef-a0af-c98d8ee52a94", 00:27:06.660 "strip_size_kb": 0, 00:27:06.660 "state": "online", 00:27:06.660 "raid_level": "raid1", 00:27:06.660 "superblock": true, 00:27:06.660 "num_base_bdevs": 4, 00:27:06.660 "num_base_bdevs_discovered": 4, 00:27:06.660 "num_base_bdevs_operational": 4, 00:27:06.660 "base_bdevs_list": [ 00:27:06.660 { 00:27:06.660 "name": "pt1", 00:27:06.660 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:06.660 "is_configured": true, 00:27:06.660 "data_offset": 2048, 00:27:06.660 "data_size": 63488 00:27:06.660 }, 00:27:06.660 { 00:27:06.660 "name": "pt2", 00:27:06.660 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:06.660 "is_configured": true, 00:27:06.660 "data_offset": 2048, 00:27:06.660 "data_size": 63488 00:27:06.660 }, 00:27:06.660 { 00:27:06.660 "name": "pt3", 00:27:06.660 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:06.660 "is_configured": true, 00:27:06.660 "data_offset": 2048, 00:27:06.660 "data_size": 63488 00:27:06.660 }, 00:27:06.660 { 00:27:06.660 "name": "pt4", 00:27:06.660 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:06.660 "is_configured": true, 00:27:06.660 "data_offset": 2048, 00:27:06.660 "data_size": 63488 00:27:06.660 } 00:27:06.660 ] 00:27:06.660 } 00:27:06.660 } 00:27:06.660 }' 00:27:06.660 09:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:06.660 09:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:27:06.660 pt2 00:27:06.660 pt3 00:27:06.660 pt4' 00:27:06.660 09:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:06.660 09:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:27:06.660 09:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:06.918 09:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:06.918 "name": "pt1", 00:27:06.918 "aliases": [ 00:27:06.918 "00000000-0000-0000-0000-000000000001" 00:27:06.918 ], 00:27:06.918 "product_name": "passthru", 00:27:06.918 "block_size": 512, 00:27:06.918 "num_blocks": 65536, 00:27:06.918 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:06.918 "assigned_rate_limits": { 00:27:06.918 "rw_ios_per_sec": 0, 00:27:06.918 "rw_mbytes_per_sec": 0, 00:27:06.918 "r_mbytes_per_sec": 0, 00:27:06.918 "w_mbytes_per_sec": 0 00:27:06.918 }, 00:27:06.918 "claimed": true, 00:27:06.918 "claim_type": "exclusive_write", 00:27:06.918 "zoned": false, 00:27:06.918 "supported_io_types": { 00:27:06.918 "read": true, 00:27:06.918 "write": true, 00:27:06.918 
"unmap": true, 00:27:06.918 "flush": true, 00:27:06.918 "reset": true, 00:27:06.918 "nvme_admin": false, 00:27:06.918 "nvme_io": false, 00:27:06.918 "nvme_io_md": false, 00:27:06.918 "write_zeroes": true, 00:27:06.918 "zcopy": true, 00:27:06.918 "get_zone_info": false, 00:27:06.918 "zone_management": false, 00:27:06.918 "zone_append": false, 00:27:06.918 "compare": false, 00:27:06.918 "compare_and_write": false, 00:27:06.918 "abort": true, 00:27:06.918 "seek_hole": false, 00:27:06.918 "seek_data": false, 00:27:06.918 "copy": true, 00:27:06.918 "nvme_iov_md": false 00:27:06.918 }, 00:27:06.918 "memory_domains": [ 00:27:06.918 { 00:27:06.918 "dma_device_id": "system", 00:27:06.918 "dma_device_type": 1 00:27:06.918 }, 00:27:06.918 { 00:27:06.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:06.918 "dma_device_type": 2 00:27:06.918 } 00:27:06.918 ], 00:27:06.918 "driver_specific": { 00:27:06.918 "passthru": { 00:27:06.918 "name": "pt1", 00:27:06.918 "base_bdev_name": "malloc1" 00:27:06.918 } 00:27:06.918 } 00:27:06.918 }' 00:27:06.918 09:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:06.918 09:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:06.918 09:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:06.918 09:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:06.918 09:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:06.918 09:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:06.918 09:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:06.918 09:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:06.918 09:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:06.918 09:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:06.918 09:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:06.918 09:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:06.918 09:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:06.918 09:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:06.918 09:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:27:07.177 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:07.177 "name": "pt2", 00:27:07.177 "aliases": [ 00:27:07.177 "00000000-0000-0000-0000-000000000002" 00:27:07.177 ], 00:27:07.177 "product_name": "passthru", 00:27:07.177 "block_size": 512, 00:27:07.177 "num_blocks": 65536, 00:27:07.177 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:07.177 "assigned_rate_limits": { 00:27:07.177 "rw_ios_per_sec": 0, 00:27:07.177 "rw_mbytes_per_sec": 0, 00:27:07.177 "r_mbytes_per_sec": 0, 00:27:07.177 "w_mbytes_per_sec": 0 00:27:07.177 }, 00:27:07.177 "claimed": true, 00:27:07.177 "claim_type": "exclusive_write", 00:27:07.177 "zoned": false, 00:27:07.177 "supported_io_types": { 00:27:07.177 "read": true, 00:27:07.177 "write": true, 00:27:07.177 "unmap": true, 00:27:07.177 "flush": true, 00:27:07.177 "reset": true, 00:27:07.177 "nvme_admin": false, 00:27:07.177 "nvme_io": false, 00:27:07.177 
"nvme_io_md": false, 00:27:07.177 "write_zeroes": true, 00:27:07.177 "zcopy": true, 00:27:07.177 "get_zone_info": false, 00:27:07.177 "zone_management": false, 00:27:07.177 "zone_append": false, 00:27:07.177 "compare": false, 00:27:07.177 "compare_and_write": false, 00:27:07.177 "abort": true, 00:27:07.177 "seek_hole": false, 00:27:07.177 "seek_data": false, 00:27:07.177 "copy": true, 00:27:07.177 "nvme_iov_md": false 00:27:07.177 }, 00:27:07.177 "memory_domains": [ 00:27:07.177 { 00:27:07.177 "dma_device_id": "system", 00:27:07.177 "dma_device_type": 1 00:27:07.177 }, 00:27:07.177 { 00:27:07.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:07.177 "dma_device_type": 2 00:27:07.177 } 00:27:07.177 ], 00:27:07.177 "driver_specific": { 00:27:07.177 "passthru": { 00:27:07.177 "name": "pt2", 00:27:07.177 "base_bdev_name": "malloc2" 00:27:07.177 } 00:27:07.177 } 00:27:07.177 }' 00:27:07.177 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:07.177 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:07.177 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:07.177 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:07.177 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:07.177 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:07.177 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:07.177 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:07.177 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:07.177 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:07.177 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:07.177 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:07.177 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:07.177 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:27:07.177 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:07.436 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:07.436 "name": "pt3", 00:27:07.436 "aliases": [ 00:27:07.436 "00000000-0000-0000-0000-000000000003" 00:27:07.436 ], 00:27:07.436 "product_name": "passthru", 00:27:07.436 "block_size": 512, 00:27:07.436 "num_blocks": 65536, 00:27:07.436 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:07.436 "assigned_rate_limits": { 00:27:07.436 "rw_ios_per_sec": 0, 00:27:07.436 "rw_mbytes_per_sec": 0, 00:27:07.436 "r_mbytes_per_sec": 0, 00:27:07.436 "w_mbytes_per_sec": 0 00:27:07.436 }, 00:27:07.436 "claimed": true, 00:27:07.436 "claim_type": "exclusive_write", 00:27:07.436 "zoned": false, 00:27:07.436 "supported_io_types": { 00:27:07.436 "read": true, 00:27:07.436 "write": true, 00:27:07.436 "unmap": true, 00:27:07.436 "flush": true, 00:27:07.436 "reset": true, 00:27:07.436 "nvme_admin": false, 00:27:07.436 "nvme_io": false, 00:27:07.436 "nvme_io_md": false, 00:27:07.436 "write_zeroes": true, 00:27:07.436 "zcopy": true, 00:27:07.436 "get_zone_info": false, 00:27:07.436 "zone_management": 
false, 00:27:07.436 "zone_append": false, 00:27:07.436 "compare": false, 00:27:07.436 "compare_and_write": false, 00:27:07.436 "abort": true, 00:27:07.436 "seek_hole": false, 00:27:07.436 "seek_data": false, 00:27:07.436 "copy": true, 00:27:07.436 "nvme_iov_md": false 00:27:07.436 }, 00:27:07.436 "memory_domains": [ 00:27:07.436 { 00:27:07.436 "dma_device_id": "system", 00:27:07.436 "dma_device_type": 1 00:27:07.436 }, 00:27:07.436 { 00:27:07.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:07.436 "dma_device_type": 2 00:27:07.436 } 00:27:07.436 ], 00:27:07.436 "driver_specific": { 00:27:07.436 "passthru": { 00:27:07.436 "name": "pt3", 00:27:07.436 "base_bdev_name": "malloc3" 00:27:07.436 } 00:27:07.436 } 00:27:07.436 }' 00:27:07.436 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:07.436 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:07.436 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:07.436 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:07.436 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:07.436 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:07.436 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:07.697 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:07.697 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:07.697 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:07.697 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:07.697 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:07.697 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:07.697 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:27:07.697 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:07.961 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:07.961 "name": "pt4", 00:27:07.961 "aliases": [ 00:27:07.961 "00000000-0000-0000-0000-000000000004" 00:27:07.961 ], 00:27:07.961 "product_name": "passthru", 00:27:07.961 "block_size": 512, 00:27:07.961 "num_blocks": 65536, 00:27:07.961 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:07.961 "assigned_rate_limits": { 00:27:07.961 "rw_ios_per_sec": 0, 00:27:07.961 "rw_mbytes_per_sec": 0, 00:27:07.961 "r_mbytes_per_sec": 0, 00:27:07.961 "w_mbytes_per_sec": 0 00:27:07.961 }, 00:27:07.961 "claimed": true, 00:27:07.961 "claim_type": "exclusive_write", 00:27:07.961 "zoned": false, 00:27:07.961 "supported_io_types": { 00:27:07.961 "read": true, 00:27:07.961 "write": true, 00:27:07.961 "unmap": true, 00:27:07.961 "flush": true, 00:27:07.961 "reset": true, 00:27:07.961 "nvme_admin": false, 00:27:07.961 "nvme_io": false, 00:27:07.961 "nvme_io_md": false, 00:27:07.961 "write_zeroes": true, 00:27:07.961 "zcopy": true, 00:27:07.961 "get_zone_info": false, 00:27:07.961 "zone_management": false, 00:27:07.961 "zone_append": false, 00:27:07.961 "compare": false, 00:27:07.961 "compare_and_write": false, 00:27:07.961 "abort": true, 00:27:07.961 
"seek_hole": false, 00:27:07.961 "seek_data": false, 00:27:07.961 "copy": true, 00:27:07.961 "nvme_iov_md": false 00:27:07.961 }, 00:27:07.961 "memory_domains": [ 00:27:07.961 { 00:27:07.961 "dma_device_id": "system", 00:27:07.961 "dma_device_type": 1 00:27:07.961 }, 00:27:07.961 { 00:27:07.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:07.961 "dma_device_type": 2 00:27:07.961 } 00:27:07.961 ], 00:27:07.961 "driver_specific": { 00:27:07.961 "passthru": { 00:27:07.961 "name": "pt4", 00:27:07.961 "base_bdev_name": "malloc4" 00:27:07.961 } 00:27:07.961 } 00:27:07.961 }' 00:27:07.961 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:07.961 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:07.961 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:07.961 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:07.961 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:07.961 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:07.961 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:07.961 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:07.961 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:07.961 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:07.961 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:07.961 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:07.961 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:07.961 09:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:27:08.225 [2024-07-15 09:53:36.118043] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:08.225 09:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 158f5884-4290-11ef-a0af-c98d8ee52a94 '!=' 158f5884-4290-11ef-a0af-c98d8ee52a94 ']' 00:27:08.225 09:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:27:08.225 09:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:08.225 09:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:27:08.225 09:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:27:08.490 [2024-07-15 09:53:36.337988] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:27:08.490 09:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:08.490 09:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:08.490 09:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:08.490 09:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:08.490 09:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:08.490 09:53:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:08.490 09:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:08.490 09:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:08.490 09:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:08.490 09:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:08.490 09:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:08.490 09:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:08.490 09:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:08.490 "name": "raid_bdev1", 00:27:08.490 "uuid": "158f5884-4290-11ef-a0af-c98d8ee52a94", 00:27:08.490 "strip_size_kb": 0, 00:27:08.490 "state": "online", 00:27:08.490 "raid_level": "raid1", 00:27:08.490 "superblock": true, 00:27:08.490 "num_base_bdevs": 4, 00:27:08.490 "num_base_bdevs_discovered": 3, 00:27:08.490 "num_base_bdevs_operational": 3, 00:27:08.490 "base_bdevs_list": [ 00:27:08.490 { 00:27:08.490 "name": null, 00:27:08.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:08.490 "is_configured": false, 00:27:08.490 "data_offset": 2048, 00:27:08.490 "data_size": 63488 00:27:08.490 }, 00:27:08.490 { 00:27:08.490 "name": "pt2", 00:27:08.490 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:08.490 "is_configured": true, 00:27:08.490 "data_offset": 2048, 00:27:08.490 "data_size": 63488 00:27:08.490 }, 00:27:08.490 { 00:27:08.490 "name": "pt3", 00:27:08.490 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:08.490 "is_configured": true, 00:27:08.490 "data_offset": 2048, 00:27:08.490 "data_size": 63488 00:27:08.490 }, 00:27:08.490 { 00:27:08.490 "name": "pt4", 00:27:08.490 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:08.490 "is_configured": true, 00:27:08.490 "data_offset": 2048, 00:27:08.490 "data_size": 63488 00:27:08.490 } 00:27:08.490 ] 00:27:08.490 }' 00:27:08.490 09:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:08.490 09:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:09.059 09:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:09.059 [2024-07-15 09:53:37.078018] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:09.059 [2024-07-15 09:53:37.078043] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:09.059 [2024-07-15 09:53:37.078054] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:09.059 [2024-07-15 09:53:37.078070] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:09.059 [2024-07-15 09:53:37.078074] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb0239834c80 name raid_bdev1, state offline 00:27:09.059 09:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:09.059 09:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:27:09.318 09:53:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:27:09.318 09:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:27:09.318 09:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:27:09.318 09:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:27:09.318 09:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:09.577 09:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:27:09.577 09:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:27:09.577 09:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:27:09.836 09:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:27:09.836 09:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:27:09.836 09:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:27:10.095 09:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:27:10.095 09:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:27:10.095 09:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:27:10.095 09:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:27:10.095 09:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:10.095 [2024-07-15 09:53:38.162103] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:10.095 [2024-07-15 09:53:38.162164] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:10.095 [2024-07-15 09:53:38.162173] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb0239835900 00:27:10.095 [2024-07-15 09:53:38.162180] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:10.095 [2024-07-15 09:53:38.162964] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:10.095 [2024-07-15 09:53:38.162986] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:10.095 [2024-07-15 09:53:38.163008] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:10.095 [2024-07-15 09:53:38.163020] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:10.095 pt2 00:27:10.095 09:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:27:10.095 09:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:10.095 09:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:10.095 09:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:10.095 09:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:10.095 09:53:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:10.095 09:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:10.095 09:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:10.095 09:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:10.095 09:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:10.095 09:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:10.095 09:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:10.355 09:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:10.355 "name": "raid_bdev1", 00:27:10.355 "uuid": "158f5884-4290-11ef-a0af-c98d8ee52a94", 00:27:10.355 "strip_size_kb": 0, 00:27:10.355 "state": "configuring", 00:27:10.355 "raid_level": "raid1", 00:27:10.355 "superblock": true, 00:27:10.355 "num_base_bdevs": 4, 00:27:10.355 "num_base_bdevs_discovered": 1, 00:27:10.355 "num_base_bdevs_operational": 3, 00:27:10.355 "base_bdevs_list": [ 00:27:10.355 { 00:27:10.355 "name": null, 00:27:10.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:10.355 "is_configured": false, 00:27:10.355 "data_offset": 2048, 00:27:10.355 "data_size": 63488 00:27:10.355 }, 00:27:10.355 { 00:27:10.355 "name": "pt2", 00:27:10.355 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:10.355 "is_configured": true, 00:27:10.355 "data_offset": 2048, 00:27:10.355 "data_size": 63488 00:27:10.355 }, 00:27:10.355 { 00:27:10.355 "name": null, 00:27:10.355 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:10.355 "is_configured": false, 00:27:10.355 "data_offset": 2048, 00:27:10.355 "data_size": 63488 00:27:10.355 }, 00:27:10.355 { 00:27:10.355 "name": null, 00:27:10.355 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:10.355 "is_configured": false, 00:27:10.355 "data_offset": 2048, 00:27:10.355 "data_size": 63488 00:27:10.355 } 00:27:10.355 ] 00:27:10.355 }' 00:27:10.355 09:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:10.355 09:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.614 09:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:27:10.614 09:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:27:10.614 09:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:10.873 [2024-07-15 09:53:38.890176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:10.874 [2024-07-15 09:53:38.890237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:10.874 [2024-07-15 09:53:38.890246] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb0239835680 00:27:10.874 [2024-07-15 09:53:38.890253] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:10.874 [2024-07-15 09:53:38.890354] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:10.874 [2024-07-15 09:53:38.890362] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt3 00:27:10.874 [2024-07-15 09:53:38.890378] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:27:10.874 [2024-07-15 09:53:38.890385] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:10.874 pt3 00:27:10.874 09:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:27:10.874 09:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:10.874 09:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:10.874 09:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:10.874 09:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:10.874 09:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:10.874 09:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:10.874 09:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:10.874 09:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:10.874 09:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:10.874 09:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:10.874 09:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:11.134 09:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:11.134 "name": "raid_bdev1", 00:27:11.134 "uuid": "158f5884-4290-11ef-a0af-c98d8ee52a94", 00:27:11.134 "strip_size_kb": 0, 00:27:11.134 "state": "configuring", 00:27:11.134 "raid_level": "raid1", 00:27:11.134 "superblock": true, 00:27:11.134 "num_base_bdevs": 4, 00:27:11.134 "num_base_bdevs_discovered": 2, 00:27:11.134 "num_base_bdevs_operational": 3, 00:27:11.134 "base_bdevs_list": [ 00:27:11.134 { 00:27:11.134 "name": null, 00:27:11.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:11.134 "is_configured": false, 00:27:11.134 "data_offset": 2048, 00:27:11.134 "data_size": 63488 00:27:11.134 }, 00:27:11.134 { 00:27:11.134 "name": "pt2", 00:27:11.134 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:11.134 "is_configured": true, 00:27:11.134 "data_offset": 2048, 00:27:11.134 "data_size": 63488 00:27:11.134 }, 00:27:11.134 { 00:27:11.134 "name": "pt3", 00:27:11.134 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:11.134 "is_configured": true, 00:27:11.134 "data_offset": 2048, 00:27:11.134 "data_size": 63488 00:27:11.134 }, 00:27:11.134 { 00:27:11.134 "name": null, 00:27:11.134 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:11.134 "is_configured": false, 00:27:11.134 "data_offset": 2048, 00:27:11.134 "data_size": 63488 00:27:11.134 } 00:27:11.134 ] 00:27:11.134 }' 00:27:11.134 09:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:11.134 09:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:11.391 09:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:27:11.391 09:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:27:11.391 09:53:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=3 00:27:11.391 09:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:27:11.649 [2024-07-15 09:53:39.582236] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:27:11.649 [2024-07-15 09:53:39.582288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:11.649 [2024-07-15 09:53:39.582298] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb0239834c80 00:27:11.649 [2024-07-15 09:53:39.582304] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:11.649 [2024-07-15 09:53:39.582395] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:11.649 [2024-07-15 09:53:39.582403] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:27:11.649 [2024-07-15 09:53:39.582417] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:27:11.649 [2024-07-15 09:53:39.582423] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:27:11.649 [2024-07-15 09:53:39.582446] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xb0239834780 00:27:11.649 [2024-07-15 09:53:39.582450] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:11.649 [2024-07-15 09:53:39.582466] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb0239897e20 00:27:11.649 [2024-07-15 09:53:39.582502] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xb0239834780 00:27:11.649 [2024-07-15 09:53:39.582506] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xb0239834780 00:27:11.649 [2024-07-15 09:53:39.582521] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:11.649 pt4 00:27:11.649 09:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:11.649 09:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:11.649 09:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:11.649 09:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:11.649 09:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:11.649 09:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:11.649 09:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:11.649 09:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:11.649 09:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:11.649 09:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:11.649 09:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:11.649 09:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:11.907 09:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:27:11.907 "name": "raid_bdev1", 00:27:11.907 "uuid": "158f5884-4290-11ef-a0af-c98d8ee52a94", 00:27:11.907 "strip_size_kb": 0, 00:27:11.907 "state": "online", 00:27:11.907 "raid_level": "raid1", 00:27:11.907 "superblock": true, 00:27:11.907 "num_base_bdevs": 4, 00:27:11.907 "num_base_bdevs_discovered": 3, 00:27:11.907 "num_base_bdevs_operational": 3, 00:27:11.907 "base_bdevs_list": [ 00:27:11.907 { 00:27:11.907 "name": null, 00:27:11.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:11.907 "is_configured": false, 00:27:11.907 "data_offset": 2048, 00:27:11.907 "data_size": 63488 00:27:11.907 }, 00:27:11.907 { 00:27:11.907 "name": "pt2", 00:27:11.907 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:11.907 "is_configured": true, 00:27:11.907 "data_offset": 2048, 00:27:11.907 "data_size": 63488 00:27:11.907 }, 00:27:11.907 { 00:27:11.907 "name": "pt3", 00:27:11.907 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:11.907 "is_configured": true, 00:27:11.907 "data_offset": 2048, 00:27:11.907 "data_size": 63488 00:27:11.907 }, 00:27:11.907 { 00:27:11.907 "name": "pt4", 00:27:11.907 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:11.907 "is_configured": true, 00:27:11.907 "data_offset": 2048, 00:27:11.907 "data_size": 63488 00:27:11.907 } 00:27:11.907 ] 00:27:11.907 }' 00:27:11.907 09:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:11.907 09:53:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:12.186 09:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:12.186 [2024-07-15 09:53:40.278268] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:12.186 [2024-07-15 09:53:40.278297] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:12.186 [2024-07-15 09:53:40.278312] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:12.186 [2024-07-15 09:53:40.278328] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:12.186 [2024-07-15 09:53:40.278332] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb0239834780 name raid_bdev1, state offline 00:27:12.443 09:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:27:12.443 09:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:12.443 09:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:27:12.443 09:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:27:12.443 09:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 4 -gt 2 ']' 00:27:12.443 09:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=3 00:27:12.443 09:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:27:12.701 09:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:12.958 [2024-07-15 09:53:40.886293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 
00:27:12.958 [2024-07-15 09:53:40.886343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:12.958 [2024-07-15 09:53:40.886352] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb0239834c80 00:27:12.958 [2024-07-15 09:53:40.886359] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:12.958 [2024-07-15 09:53:40.887125] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:12.958 [2024-07-15 09:53:40.887151] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:12.958 [2024-07-15 09:53:40.887171] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:12.958 [2024-07-15 09:53:40.887182] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:12.958 [2024-07-15 09:53:40.887207] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:27:12.958 [2024-07-15 09:53:40.887211] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:12.958 [2024-07-15 09:53:40.887216] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb0239834780 name raid_bdev1, state configuring 00:27:12.958 [2024-07-15 09:53:40.887223] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:12.958 [2024-07-15 09:53:40.887238] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:12.958 pt1 00:27:12.958 09:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 4 -gt 2 ']' 00:27:12.958 09:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:27:12.958 09:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:12.958 09:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:12.958 09:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:12.958 09:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:12.958 09:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:12.958 09:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:12.958 09:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:12.958 09:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:12.958 09:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:12.958 09:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:12.958 09:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:13.215 09:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:13.215 "name": "raid_bdev1", 00:27:13.215 "uuid": "158f5884-4290-11ef-a0af-c98d8ee52a94", 00:27:13.215 "strip_size_kb": 0, 00:27:13.215 "state": "configuring", 00:27:13.215 "raid_level": "raid1", 00:27:13.215 "superblock": true, 00:27:13.215 "num_base_bdevs": 4, 00:27:13.215 "num_base_bdevs_discovered": 2, 00:27:13.215 "num_base_bdevs_operational": 3, 00:27:13.215 
"base_bdevs_list": [ 00:27:13.215 { 00:27:13.215 "name": null, 00:27:13.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:13.215 "is_configured": false, 00:27:13.215 "data_offset": 2048, 00:27:13.215 "data_size": 63488 00:27:13.215 }, 00:27:13.215 { 00:27:13.215 "name": "pt2", 00:27:13.215 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:13.215 "is_configured": true, 00:27:13.215 "data_offset": 2048, 00:27:13.215 "data_size": 63488 00:27:13.215 }, 00:27:13.215 { 00:27:13.215 "name": "pt3", 00:27:13.215 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:13.215 "is_configured": true, 00:27:13.215 "data_offset": 2048, 00:27:13.215 "data_size": 63488 00:27:13.215 }, 00:27:13.215 { 00:27:13.215 "name": null, 00:27:13.215 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:13.215 "is_configured": false, 00:27:13.215 "data_offset": 2048, 00:27:13.215 "data_size": 63488 00:27:13.215 } 00:27:13.215 ] 00:27:13.215 }' 00:27:13.215 09:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:13.215 09:53:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:13.473 09:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:27:13.473 09:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:27:13.730 09:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:27:13.730 09:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:27:13.730 [2024-07-15 09:53:41.770354] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:27:13.730 [2024-07-15 09:53:41.770401] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:13.730 [2024-07-15 09:53:41.770410] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb0239835180 00:27:13.730 [2024-07-15 09:53:41.770416] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:13.730 [2024-07-15 09:53:41.770514] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:13.730 [2024-07-15 09:53:41.770521] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:27:13.730 [2024-07-15 09:53:41.770536] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:27:13.730 [2024-07-15 09:53:41.770542] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:27:13.730 [2024-07-15 09:53:41.770566] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xb0239834780 00:27:13.730 [2024-07-15 09:53:41.770569] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:13.730 [2024-07-15 09:53:41.770586] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb0239897e20 00:27:13.730 [2024-07-15 09:53:41.770621] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xb0239834780 00:27:13.730 [2024-07-15 09:53:41.770624] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xb0239834780 00:27:13.730 [2024-07-15 09:53:41.770638] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:13.730 pt4 
00:27:13.730 09:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:13.730 09:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:13.730 09:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:13.731 09:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:13.731 09:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:13.731 09:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:13.731 09:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:13.731 09:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:13.731 09:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:13.731 09:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:13.731 09:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:13.731 09:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:13.989 09:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:13.989 "name": "raid_bdev1", 00:27:13.989 "uuid": "158f5884-4290-11ef-a0af-c98d8ee52a94", 00:27:13.989 "strip_size_kb": 0, 00:27:13.989 "state": "online", 00:27:13.989 "raid_level": "raid1", 00:27:13.989 "superblock": true, 00:27:13.989 "num_base_bdevs": 4, 00:27:13.989 "num_base_bdevs_discovered": 3, 00:27:13.989 "num_base_bdevs_operational": 3, 00:27:13.989 "base_bdevs_list": [ 00:27:13.989 { 00:27:13.989 "name": null, 00:27:13.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:13.989 "is_configured": false, 00:27:13.989 "data_offset": 2048, 00:27:13.989 "data_size": 63488 00:27:13.989 }, 00:27:13.989 { 00:27:13.989 "name": "pt2", 00:27:13.989 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:13.989 "is_configured": true, 00:27:13.989 "data_offset": 2048, 00:27:13.989 "data_size": 63488 00:27:13.989 }, 00:27:13.989 { 00:27:13.989 "name": "pt3", 00:27:13.989 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:13.989 "is_configured": true, 00:27:13.989 "data_offset": 2048, 00:27:13.989 "data_size": 63488 00:27:13.989 }, 00:27:13.989 { 00:27:13.989 "name": "pt4", 00:27:13.989 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:13.989 "is_configured": true, 00:27:13.989 "data_offset": 2048, 00:27:13.989 "data_size": 63488 00:27:13.989 } 00:27:13.989 ] 00:27:13.989 }' 00:27:13.989 09:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:13.989 09:53:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.246 09:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:27:14.246 09:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:27:14.503 09:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:27:14.503 09:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:14.503 09:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:27:14.762 [2024-07-15 09:53:42.626427] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:14.762 09:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 158f5884-4290-11ef-a0af-c98d8ee52a94 '!=' 158f5884-4290-11ef-a0af-c98d8ee52a94 ']' 00:27:14.762 09:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 64376 00:27:14.762 09:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 64376 ']' 00:27:14.762 09:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 64376 00:27:14.762 09:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:27:14.762 09:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:27:14.762 09:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 64376 00:27:14.762 09:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:27:14.762 09:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:27:14.762 09:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:27:14.762 killing process with pid 64376 00:27:14.763 09:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64376' 00:27:14.763 09:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 64376 00:27:14.763 [2024-07-15 09:53:42.655235] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:14.763 [2024-07-15 09:53:42.655249] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:14.763 [2024-07-15 09:53:42.655273] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:14.763 [2024-07-15 09:53:42.655277] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb0239834780 name raid_bdev1, state offline 00:27:14.763 09:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 64376 00:27:14.763 [2024-07-15 09:53:42.689824] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:15.033 09:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:27:15.033 00:27:15.033 real 0m18.178s 00:27:15.033 user 0m32.182s 00:27:15.033 sys 0m3.370s 00:27:15.033 09:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:15.033 ************************************ 00:27:15.033 END TEST raid_superblock_test 00:27:15.033 09:53:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:15.033 ************************************ 00:27:15.033 09:53:42 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:27:15.033 09:53:42 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:27:15.033 09:53:42 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:27:15.033 09:53:42 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:15.033 09:53:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:15.033 ************************************ 00:27:15.033 START TEST raid_read_error_test 00:27:15.033 ************************************ 
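raid_read_error_test drives bdevperf traffic against a raid1 volume while injecting read failures into one member. Each member stack it builds, visible in the trace that follows, layers an error-injection bdev between a malloc bdev and the passthru bdev that joins the raid. Roughly, reusing $rpc and $sock from the earlier sketch (the EE_ prefix is the name bdev_error_create gives the injector stacked on BaseBdev1_malloc, as the passthru creation below shows):

    # one leg of the raid1: malloc -> error injector -> passthru
    $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
    $rpc -s $sock bdev_error_create BaseBdev1_malloc
    $rpc -s $sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
    # ... repeated for BaseBdev2..4, then assembled with a superblock:
    $rpc -s $sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s
    # while bdevperf runs, fail reads on one leg; raid1 satisfies them from a
    # mirror, so all 4 base bdevs are expected to remain operational
    $rpc -s $sock bdev_error_inject_error EE_BaseBdev1_malloc read failure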
00:27:15.033 09:53:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 4 read 00:27:15.033 09:53:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:27:15.033 09:53:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:27:15.033 09:53:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:27:15.033 09:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:27:15.033 09:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:15.033 09:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:27:15.033 09:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:15.033 09:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:15.033 09:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:27:15.033 09:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:15.033 09:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:15.033 09:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:27:15.033 09:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:15.033 09:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:15.033 09:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:27:15.033 09:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:15.033 09:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:15.033 09:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:15.033 09:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:27:15.033 09:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:27:15.033 09:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:27:15.033 09:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:27:15.033 09:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:27:15.033 09:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:27:15.033 09:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:27:15.033 09:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:27:15.033 09:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:27:15.033 09:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.RCQzCNbWB9 00:27:15.033 09:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=65000 00:27:15.033 09:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 65000 /var/tmp/spdk-raid.sock 00:27:15.033 09:53:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 65000 ']' 00:27:15.033 09:53:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:15.033 09:53:43 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:27:15.033 09:53:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:15.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:15.033 09:53:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:15.033 09:53:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:15.033 09:53:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:15.033 [2024-07-15 09:53:43.025916] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:27:15.033 [2024-07-15 09:53:43.026206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:27:15.991 EAL: TSC is not safe to use in SMP mode 00:27:15.991 EAL: TSC is not invariant 00:27:15.991 [2024-07-15 09:53:43.767271] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.991 [2024-07-15 09:53:43.880622] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:27:15.991 [2024-07-15 09:53:43.883013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.991 [2024-07-15 09:53:43.883700] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:15.991 [2024-07-15 09:53:43.883705] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:15.991 09:53:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:15.991 09:53:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:27:15.991 09:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:15.991 09:53:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:16.250 BaseBdev1_malloc 00:27:16.250 09:53:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:27:16.250 true 00:27:16.250 09:53:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:27:16.508 [2024-07-15 09:53:44.534520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:27:16.508 [2024-07-15 09:53:44.534584] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:16.508 [2024-07-15 09:53:44.534612] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f9be7234780 00:27:16.508 [2024-07-15 09:53:44.534618] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:16.508 [2024-07-15 09:53:44.535288] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:16.508 [2024-07-15 09:53:44.535316] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:16.508 BaseBdev1 00:27:16.508 09:53:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:16.508 09:53:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:16.768 BaseBdev2_malloc 00:27:16.768 09:53:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:27:17.027 true 00:27:17.027 09:53:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:27:17.286 [2024-07-15 09:53:45.219546] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:27:17.286 [2024-07-15 09:53:45.219625] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:17.286 [2024-07-15 09:53:45.219657] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f9be7234c80 00:27:17.286 [2024-07-15 09:53:45.219664] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:17.286 [2024-07-15 09:53:45.220558] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:17.286 [2024-07-15 09:53:45.220601] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:17.286 BaseBdev2 00:27:17.286 09:53:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:17.286 09:53:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:17.549 BaseBdev3_malloc 00:27:17.549 09:53:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:27:17.549 true 00:27:17.549 09:53:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:27:17.808 [2024-07-15 09:53:45.811570] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:27:17.809 [2024-07-15 09:53:45.811633] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:17.809 [2024-07-15 09:53:45.811663] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f9be7235180 00:27:17.809 [2024-07-15 09:53:45.811670] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:17.809 [2024-07-15 09:53:45.812380] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:17.809 [2024-07-15 09:53:45.812409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:17.809 BaseBdev3 00:27:17.809 09:53:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:17.809 09:53:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:27:18.068 BaseBdev4_malloc 00:27:18.068 09:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create 
BaseBdev4_malloc 00:27:18.327 true 00:27:18.327 09:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:27:18.586 [2024-07-15 09:53:46.447613] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:27:18.586 [2024-07-15 09:53:46.447677] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:18.586 [2024-07-15 09:53:46.447709] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f9be7235680 00:27:18.586 [2024-07-15 09:53:46.447716] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:18.587 [2024-07-15 09:53:46.448378] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:18.587 [2024-07-15 09:53:46.448413] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:27:18.587 BaseBdev4 00:27:18.587 09:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:27:18.587 [2024-07-15 09:53:46.635624] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:18.587 [2024-07-15 09:53:46.636225] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:18.587 [2024-07-15 09:53:46.636257] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:18.587 [2024-07-15 09:53:46.636270] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:18.587 [2024-07-15 09:53:46.636344] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f9be7235900 00:27:18.587 [2024-07-15 09:53:46.636350] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:18.587 [2024-07-15 09:53:46.636388] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f9be72a0e20 00:27:18.587 [2024-07-15 09:53:46.636464] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f9be7235900 00:27:18.587 [2024-07-15 09:53:46.636467] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1f9be7235900 00:27:18.587 [2024-07-15 09:53:46.636486] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:18.587 09:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:27:18.587 09:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:18.587 09:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:18.587 09:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:18.587 09:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:18.587 09:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:18.587 09:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:18.587 09:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:18.587 09:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:18.587 
09:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:18.587 09:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:18.587 09:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:18.846 09:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:18.846 "name": "raid_bdev1", 00:27:18.846 "uuid": "20e4add4-4290-11ef-a0af-c98d8ee52a94", 00:27:18.846 "strip_size_kb": 0, 00:27:18.846 "state": "online", 00:27:18.846 "raid_level": "raid1", 00:27:18.846 "superblock": true, 00:27:18.846 "num_base_bdevs": 4, 00:27:18.846 "num_base_bdevs_discovered": 4, 00:27:18.846 "num_base_bdevs_operational": 4, 00:27:18.846 "base_bdevs_list": [ 00:27:18.846 { 00:27:18.846 "name": "BaseBdev1", 00:27:18.846 "uuid": "c5e82abc-22fb-555a-bb2b-41f34df141d6", 00:27:18.846 "is_configured": true, 00:27:18.846 "data_offset": 2048, 00:27:18.846 "data_size": 63488 00:27:18.846 }, 00:27:18.846 { 00:27:18.846 "name": "BaseBdev2", 00:27:18.846 "uuid": "e854a5ab-4a77-9a5b-9065-b4506f4fb492", 00:27:18.846 "is_configured": true, 00:27:18.846 "data_offset": 2048, 00:27:18.846 "data_size": 63488 00:27:18.846 }, 00:27:18.846 { 00:27:18.846 "name": "BaseBdev3", 00:27:18.846 "uuid": "cf02ee2f-c92f-325d-b4b7-5061e27f220a", 00:27:18.846 "is_configured": true, 00:27:18.846 "data_offset": 2048, 00:27:18.846 "data_size": 63488 00:27:18.846 }, 00:27:18.846 { 00:27:18.846 "name": "BaseBdev4", 00:27:18.846 "uuid": "57fda4e0-195f-2452-a661-432877f3d7e4", 00:27:18.846 "is_configured": true, 00:27:18.846 "data_offset": 2048, 00:27:18.846 "data_size": 63488 00:27:18.846 } 00:27:18.846 ] 00:27:18.846 }' 00:27:18.846 09:53:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:18.846 09:53:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:19.415 09:53:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:27:19.415 09:53:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:27:19.415 [2024-07-15 09:53:47.343732] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f9be72a0ec0 00:27:20.351 09:53:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:27:20.610 09:53:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:27:20.610 09:53:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:27:20.610 09:53:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:27:20.610 09:53:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:27:20.610 09:53:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:27:20.610 09:53:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:20.610 09:53:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:20.610 09:53:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:20.610 09:53:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:20.610 09:53:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:20.610 09:53:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:20.610 09:53:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:20.610 09:53:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:20.610 09:53:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:20.610 09:53:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:20.610 09:53:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:20.869 09:53:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:20.869 "name": "raid_bdev1", 00:27:20.869 "uuid": "20e4add4-4290-11ef-a0af-c98d8ee52a94", 00:27:20.869 "strip_size_kb": 0, 00:27:20.869 "state": "online", 00:27:20.869 "raid_level": "raid1", 00:27:20.869 "superblock": true, 00:27:20.869 "num_base_bdevs": 4, 00:27:20.869 "num_base_bdevs_discovered": 4, 00:27:20.869 "num_base_bdevs_operational": 4, 00:27:20.869 "base_bdevs_list": [ 00:27:20.869 { 00:27:20.869 "name": "BaseBdev1", 00:27:20.869 "uuid": "c5e82abc-22fb-555a-bb2b-41f34df141d6", 00:27:20.869 "is_configured": true, 00:27:20.869 "data_offset": 2048, 00:27:20.869 "data_size": 63488 00:27:20.869 }, 00:27:20.869 { 00:27:20.869 "name": "BaseBdev2", 00:27:20.869 "uuid": "e854a5ab-4a77-9a5b-9065-b4506f4fb492", 00:27:20.869 "is_configured": true, 00:27:20.869 "data_offset": 2048, 00:27:20.869 "data_size": 63488 00:27:20.869 }, 00:27:20.869 { 00:27:20.869 "name": "BaseBdev3", 00:27:20.869 "uuid": "cf02ee2f-c92f-325d-b4b7-5061e27f220a", 00:27:20.869 "is_configured": true, 00:27:20.869 "data_offset": 2048, 00:27:20.869 "data_size": 63488 00:27:20.869 }, 00:27:20.869 { 00:27:20.869 "name": "BaseBdev4", 00:27:20.869 "uuid": "57fda4e0-195f-2452-a661-432877f3d7e4", 00:27:20.869 "is_configured": true, 00:27:20.869 "data_offset": 2048, 00:27:20.869 "data_size": 63488 00:27:20.869 } 00:27:20.869 ] 00:27:20.869 }' 00:27:20.869 09:53:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:20.869 09:53:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:21.128 09:53:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:21.388 [2024-07-15 09:53:49.324919] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:21.388 [2024-07-15 09:53:49.324959] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:21.388 [2024-07-15 09:53:49.325391] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:21.388 [2024-07-15 09:53:49.325402] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:21.388 [2024-07-15 09:53:49.325424] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:21.388 [2024-07-15 09:53:49.325429] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f9be7235900 name raid_bdev1, state offline 00:27:21.388 0 00:27:21.388 09:53:49 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 65000 00:27:21.388 09:53:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 65000 ']' 00:27:21.388 09:53:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 65000 00:27:21.388 09:53:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:27:21.388 09:53:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:27:21.388 09:53:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 65000 00:27:21.388 09:53:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:27:21.388 09:53:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:27:21.388 09:53:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:27:21.388 killing process with pid 65000 00:27:21.388 09:53:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65000' 00:27:21.388 09:53:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 65000 00:27:21.388 [2024-07-15 09:53:49.360799] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:21.388 09:53:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 65000 00:27:21.388 [2024-07-15 09:53:49.397016] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:21.647 09:53:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.RCQzCNbWB9 00:27:21.647 09:53:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:27:21.647 09:53:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:27:21.647 09:53:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:27:21.647 09:53:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:27:21.647 09:53:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:21.647 09:53:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:27:21.647 09:53:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:27:21.647 00:27:21.647 real 0m6.667s 00:27:21.647 user 0m10.085s 00:27:21.647 sys 0m1.388s 00:27:21.647 09:53:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:21.647 ************************************ 00:27:21.647 END TEST raid_read_error_test 00:27:21.647 ************************************ 00:27:21.647 09:53:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:21.647 09:53:49 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:27:21.647 09:53:49 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:27:21.647 09:53:49 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:27:21.647 09:53:49 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:21.647 09:53:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:21.647 ************************************ 00:27:21.647 START TEST raid_write_error_test 00:27:21.647 ************************************ 00:27:21.647 09:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 4 write 00:27:21.647 09:53:49 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:27:21.647 09:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:27:21.647 09:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:27:21.647 09:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:27:21.647 09:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:21.647 09:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:27:21.647 09:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:21.647 09:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:21.647 09:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:27:21.647 09:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:21.647 09:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:21.647 09:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:27:21.647 09:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:21.647 09:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:21.647 09:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:27:21.647 09:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:21.647 09:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:21.647 09:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:21.647 09:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:27:21.647 09:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:27:21.647 09:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:27:21.647 09:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:27:21.647 09:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:27:21.647 09:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:27:21.647 09:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:27:21.647 09:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:27:21.647 09:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:27:21.647 09:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.dD4NYM0osY 00:27:21.647 09:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=65138 00:27:21.647 09:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:27:21.647 09:53:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 65138 /var/tmp/spdk-raid.sock 00:27:21.647 09:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 65138 ']' 00:27:21.647 09:53:49 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:21.647 09:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:21.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:21.647 09:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:21.647 09:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:21.647 09:53:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:21.647 [2024-07-15 09:53:49.746020] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:27:21.647 [2024-07-15 09:53:49.746346] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:27:22.586 EAL: TSC is not safe to use in SMP mode 00:27:22.586 EAL: TSC is not invariant 00:27:22.586 [2024-07-15 09:53:50.472723] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.586 [2024-07-15 09:53:50.586512] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:27:22.586 [2024-07-15 09:53:50.589022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.586 [2024-07-15 09:53:50.589747] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:22.586 [2024-07-15 09:53:50.589759] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:22.845 09:53:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:22.845 09:53:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:27:22.845 09:53:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:22.845 09:53:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:22.845 BaseBdev1_malloc 00:27:22.845 09:53:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:27:23.104 true 00:27:23.104 09:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:27:23.401 [2024-07-15 09:53:51.280978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:27:23.401 [2024-07-15 09:53:51.281048] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:23.401 [2024-07-15 09:53:51.281079] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x317dbce34780 00:27:23.401 [2024-07-15 09:53:51.281086] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:23.401 [2024-07-15 09:53:51.281804] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:23.401 [2024-07-15 09:53:51.281850] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:23.401 BaseBdev1 00:27:23.401 09:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:23.401 09:53:51 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:23.661 BaseBdev2_malloc 00:27:23.661 09:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:27:23.661 true 00:27:23.661 09:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:27:23.920 [2024-07-15 09:53:51.933073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:27:23.920 [2024-07-15 09:53:51.933138] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:23.920 [2024-07-15 09:53:51.933166] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x317dbce34c80 00:27:23.920 [2024-07-15 09:53:51.933173] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:23.920 [2024-07-15 09:53:51.933857] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:23.920 [2024-07-15 09:53:51.933886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:23.920 BaseBdev2 00:27:23.920 09:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:23.920 09:53:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:24.179 BaseBdev3_malloc 00:27:24.179 09:53:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:27:24.437 true 00:27:24.437 09:53:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:27:24.695 [2024-07-15 09:53:52.553106] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:27:24.695 [2024-07-15 09:53:52.553161] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:24.695 [2024-07-15 09:53:52.553186] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x317dbce35180 00:27:24.695 [2024-07-15 09:53:52.553193] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:24.695 [2024-07-15 09:53:52.553860] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:24.695 [2024-07-15 09:53:52.553889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:24.695 BaseBdev3 00:27:24.695 09:53:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:24.695 09:53:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:27:24.695 BaseBdev4_malloc 00:27:24.695 09:53:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:27:24.953 true 00:27:24.953 09:53:52 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:27:25.211 [2024-07-15 09:53:53.141150] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:27:25.211 [2024-07-15 09:53:53.141212] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:25.211 [2024-07-15 09:53:53.141237] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x317dbce35680 00:27:25.211 [2024-07-15 09:53:53.141244] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:25.211 [2024-07-15 09:53:53.141912] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:25.211 [2024-07-15 09:53:53.141943] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:27:25.211 BaseBdev4 00:27:25.211 09:53:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:27:25.469 [2024-07-15 09:53:53.337166] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:25.469 [2024-07-15 09:53:53.337779] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:25.469 [2024-07-15 09:53:53.337800] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:25.469 [2024-07-15 09:53:53.337814] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:25.469 [2024-07-15 09:53:53.337881] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x317dbce35900 00:27:25.469 [2024-07-15 09:53:53.337887] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:25.469 [2024-07-15 09:53:53.337925] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x317dbcea0e20 00:27:25.469 [2024-07-15 09:53:53.338000] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x317dbce35900 00:27:25.469 [2024-07-15 09:53:53.338003] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x317dbce35900 00:27:25.469 [2024-07-15 09:53:53.338021] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:25.469 09:53:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:27:25.469 09:53:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:25.469 09:53:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:25.469 09:53:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:25.470 09:53:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:25.470 09:53:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:25.470 09:53:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:25.470 09:53:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:25.470 09:53:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:25.470 09:53:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 
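(For reference: the four base-bdev stacks traced above at bdev_raid.sh@812-@815, plus the raid assembly at @819, reduce to the RPC sequence below. This is an illustrative reduction of the trace, not the test script itself; the loop and the $rpc shorthand are assumptions made here for compactness.)

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3 4; do
        # 32 MiB malloc bdev with 512-byte blocks
        $rpc bdev_malloc_create 32 512 -b BaseBdev${i}_malloc
        # error-injection wrapper; registers as EE_BaseBdev<i>_malloc
        $rpc bdev_error_create BaseBdev${i}_malloc
        # passthru on top, so the raid consumes a plain BaseBdev<i>
        $rpc bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
    done
    # assemble raid1 across the four stacks, with an on-disk superblock (-s)
    $rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s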
00:27:25.470 09:53:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:25.470 09:53:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:25.470 09:53:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:25.470 "name": "raid_bdev1", 00:27:25.470 "uuid": "24e340bd-4290-11ef-a0af-c98d8ee52a94", 00:27:25.470 "strip_size_kb": 0, 00:27:25.470 "state": "online", 00:27:25.470 "raid_level": "raid1", 00:27:25.470 "superblock": true, 00:27:25.470 "num_base_bdevs": 4, 00:27:25.470 "num_base_bdevs_discovered": 4, 00:27:25.470 "num_base_bdevs_operational": 4, 00:27:25.470 "base_bdevs_list": [ 00:27:25.470 { 00:27:25.470 "name": "BaseBdev1", 00:27:25.470 "uuid": "0d942311-4148-f75c-ab35-5a6fdcd70fb1", 00:27:25.470 "is_configured": true, 00:27:25.470 "data_offset": 2048, 00:27:25.470 "data_size": 63488 00:27:25.470 }, 00:27:25.470 { 00:27:25.470 "name": "BaseBdev2", 00:27:25.470 "uuid": "8869ea74-6722-2d5c-bd6a-a38f45a7ba83", 00:27:25.470 "is_configured": true, 00:27:25.470 "data_offset": 2048, 00:27:25.470 "data_size": 63488 00:27:25.470 }, 00:27:25.470 { 00:27:25.470 "name": "BaseBdev3", 00:27:25.470 "uuid": "01dc133c-65b5-e254-a3f0-fdb080929d4b", 00:27:25.470 "is_configured": true, 00:27:25.470 "data_offset": 2048, 00:27:25.470 "data_size": 63488 00:27:25.470 }, 00:27:25.470 { 00:27:25.470 "name": "BaseBdev4", 00:27:25.470 "uuid": "18b53b43-eefe-4759-8a0a-4c68ce7500c9", 00:27:25.470 "is_configured": true, 00:27:25.470 "data_offset": 2048, 00:27:25.470 "data_size": 63488 00:27:25.470 } 00:27:25.470 ] 00:27:25.470 }' 00:27:25.470 09:53:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:25.470 09:53:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:26.036 09:53:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:27:26.036 09:53:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:27:26.036 [2024-07-15 09:53:53.965284] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x317dbcea0ec0 00:27:26.970 09:53:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:27:27.228 [2024-07-15 09:53:55.105108] bdev_raid.c:2222:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:27:27.228 [2024-07-15 09:53:55.105179] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:27.228 [2024-07-15 09:53:55.105315] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x317dbcea0ec0 00:27:27.228 09:53:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:27:27.229 09:53:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:27:27.229 09:53:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:27:27.229 09:53:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=3 00:27:27.229 09:53:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:27.229 
09:53:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:27.229 09:53:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:27.229 09:53:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:27.229 09:53:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:27.229 09:53:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:27.229 09:53:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:27.229 09:53:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:27.229 09:53:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:27.229 09:53:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:27.229 09:53:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:27.229 09:53:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:27.229 09:53:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:27.229 "name": "raid_bdev1", 00:27:27.229 "uuid": "24e340bd-4290-11ef-a0af-c98d8ee52a94", 00:27:27.229 "strip_size_kb": 0, 00:27:27.229 "state": "online", 00:27:27.229 "raid_level": "raid1", 00:27:27.229 "superblock": true, 00:27:27.229 "num_base_bdevs": 4, 00:27:27.229 "num_base_bdevs_discovered": 3, 00:27:27.229 "num_base_bdevs_operational": 3, 00:27:27.229 "base_bdevs_list": [ 00:27:27.229 { 00:27:27.229 "name": null, 00:27:27.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:27.229 "is_configured": false, 00:27:27.229 "data_offset": 2048, 00:27:27.229 "data_size": 63488 00:27:27.229 }, 00:27:27.229 { 00:27:27.229 "name": "BaseBdev2", 00:27:27.229 "uuid": "8869ea74-6722-2d5c-bd6a-a38f45a7ba83", 00:27:27.229 "is_configured": true, 00:27:27.229 "data_offset": 2048, 00:27:27.229 "data_size": 63488 00:27:27.229 }, 00:27:27.229 { 00:27:27.229 "name": "BaseBdev3", 00:27:27.229 "uuid": "01dc133c-65b5-e254-a3f0-fdb080929d4b", 00:27:27.229 "is_configured": true, 00:27:27.229 "data_offset": 2048, 00:27:27.229 "data_size": 63488 00:27:27.229 }, 00:27:27.229 { 00:27:27.229 "name": "BaseBdev4", 00:27:27.229 "uuid": "18b53b43-eefe-4759-8a0a-4c68ce7500c9", 00:27:27.229 "is_configured": true, 00:27:27.229 "data_offset": 2048, 00:27:27.229 "data_size": 63488 00:27:27.229 } 00:27:27.229 ] 00:27:27.229 }' 00:27:27.229 09:53:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:27.229 09:53:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:27.487 09:53:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:27.744 [2024-07-15 09:53:55.767689] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:27.744 [2024-07-15 09:53:55.767717] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:27.744 [2024-07-15 09:53:55.768138] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:27.744 [2024-07-15 09:53:55.768148] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
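(The failure case just verified reduces to one error-injection RPC plus a state query; a sketch follows, with the trailing jq field extraction added here purely for illustration — the test's own verify_raid_bdev_state helper performs the equivalent comparisons.)

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # make every write to BaseBdev1's error bdev fail
    $rpc bdev_error_inject_error EE_BaseBdev1_malloc write failure
    # raid1 tolerates the lost leg: still online, 3 of 4 base bdevs discovered
    $rpc bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | .state, .num_base_bdevs_discovered'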
00:27:27.744 [2024-07-15 09:53:55.768169] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:27.744 [2024-07-15 09:53:55.768174] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x317dbce35900 name raid_bdev1, state offline 00:27:27.744 0 00:27:27.744 09:53:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 65138 00:27:27.744 09:53:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 65138 ']' 00:27:27.744 09:53:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 65138 00:27:27.744 09:53:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:27:27.744 09:53:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:27:27.744 09:53:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 65138 00:27:27.744 09:53:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:27:27.744 09:53:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:27:27.744 09:53:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:27:27.744 killing process with pid 65138 00:27:27.744 09:53:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65138' 00:27:27.744 09:53:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 65138 00:27:27.744 [2024-07-15 09:53:55.799579] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:27.744 09:53:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 65138 00:27:27.744 [2024-07-15 09:53:55.834185] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:28.003 09:53:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.dD4NYM0osY 00:27:28.003 09:53:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:27:28.003 09:53:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:27:28.003 09:53:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:27:28.003 09:53:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:27:28.003 09:53:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:28.003 09:53:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:27:28.003 09:53:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:27:28.003 00:27:28.003 real 0m6.379s 00:27:28.003 user 0m9.596s 00:27:28.003 sys 0m1.326s 00:27:28.003 09:53:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:28.003 09:53:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:28.003 ************************************ 00:27:28.003 END TEST raid_write_error_test 00:27:28.003 ************************************ 00:27:28.262 09:53:56 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:27:28.262 09:53:56 bdev_raid -- bdev/bdev_raid.sh@875 -- # '[' '' = true ']' 00:27:28.262 09:53:56 bdev_raid -- bdev/bdev_raid.sh@884 -- # '[' n == y ']' 00:27:28.262 09:53:56 bdev_raid -- bdev/bdev_raid.sh@896 -- # base_blocklen=4096 00:27:28.262 09:53:56 bdev_raid -- bdev/bdev_raid.sh@898 -- # run_test 
raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:27:28.262 09:53:56 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:27:28.262 09:53:56 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:28.262 09:53:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:28.262 ************************************ 00:27:28.262 START TEST raid_state_function_test_sb_4k 00:27:28.262 ************************************ 00:27:28.262 09:53:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:27:28.262 09:53:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:27:28.262 09:53:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:27:28.262 09:53:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:27:28.262 09:53:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:27:28.262 09:53:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:27:28.262 09:53:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:28.262 09:53:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:27:28.262 09:53:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:27:28.262 09:53:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:28.262 09:53:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:27:28.262 09:53:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:27:28.262 09:53:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:28.262 09:53:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:28.262 09:53:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:27:28.262 09:53:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:27:28.262 09:53:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local strip_size 00:27:28.262 09:53:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:27:28.262 09:53:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:27:28.262 09:53:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:27:28.262 09:53:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:27:28.262 09:53:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:27:28.262 09:53:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:27:28.262 09:53:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # raid_pid=65270 00:27:28.262 Process raid pid: 65270 00:27:28.262 09:53:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 65270' 00:27:28.262 09:53:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # waitforlisten 65270 /var/tmp/spdk-raid.sock 00:27:28.262 09:53:56 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:27:28.262 09:53:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@829 -- # '[' -z 65270 ']' 00:27:28.262 09:53:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:28.262 09:53:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:28.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:28.262 09:53:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:28.262 09:53:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:28.262 09:53:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:28.262 [2024-07-15 09:53:56.177631] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:27:28.262 [2024-07-15 09:53:56.177934] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:27:28.521 EAL: TSC is not safe to use in SMP mode 00:27:28.521 EAL: TSC is not invariant 00:27:28.521 [2024-07-15 09:53:56.620568] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.779 [2024-07-15 09:53:56.735217] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:27:28.779 [2024-07-15 09:53:56.737674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:28.779 [2024-07-15 09:53:56.738369] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:28.779 [2024-07-15 09:53:56.738381] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:29.037 09:53:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:29.037 09:53:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@862 -- # return 0 00:27:29.037 09:53:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:27:29.295 [2024-07-15 09:53:57.285439] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:29.295 [2024-07-15 09:53:57.285519] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:29.295 [2024-07-15 09:53:57.285524] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:29.295 [2024-07-15 09:53:57.285530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:29.295 09:53:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:29.295 09:53:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:29.295 09:53:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:29.295 09:53:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
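(The launch-and-wait step traced here has roughly the shape sketched below; the polling loop is an assumption for illustration only, since the real waitforlisten helper in common/autotest_common.sh is more elaborate.)

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # poll until the app answers RPCs on the UNIX domain socket
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done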
00:27:29.295 09:53:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:29.295 09:53:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:29.295 09:53:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:29.295 09:53:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:29.295 09:53:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:29.295 09:53:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:29.295 09:53:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:29.295 09:53:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:29.553 09:53:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:29.553 "name": "Existed_Raid", 00:27:29.553 "uuid": "273db61a-4290-11ef-a0af-c98d8ee52a94", 00:27:29.553 "strip_size_kb": 0, 00:27:29.553 "state": "configuring", 00:27:29.553 "raid_level": "raid1", 00:27:29.553 "superblock": true, 00:27:29.553 "num_base_bdevs": 2, 00:27:29.553 "num_base_bdevs_discovered": 0, 00:27:29.553 "num_base_bdevs_operational": 2, 00:27:29.553 "base_bdevs_list": [ 00:27:29.553 { 00:27:29.553 "name": "BaseBdev1", 00:27:29.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:29.553 "is_configured": false, 00:27:29.553 "data_offset": 0, 00:27:29.553 "data_size": 0 00:27:29.553 }, 00:27:29.553 { 00:27:29.553 "name": "BaseBdev2", 00:27:29.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:29.553 "is_configured": false, 00:27:29.553 "data_offset": 0, 00:27:29.553 "data_size": 0 00:27:29.553 } 00:27:29.553 ] 00:27:29.553 }' 00:27:29.553 09:53:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:29.553 09:53:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:29.811 09:53:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:30.069 [2024-07-15 09:53:58.033435] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:30.069 [2024-07-15 09:53:58.033464] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x33a11e634500 name Existed_Raid, state configuring 00:27:30.069 09:53:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:27:30.326 [2024-07-15 09:53:58.329459] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:30.326 [2024-07-15 09:53:58.329508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:30.326 [2024-07-15 09:53:58.329512] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:30.326 [2024-07-15 09:53:58.329518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:30.326 09:53:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:27:30.585 [2024-07-15 09:53:58.514623] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:30.585 BaseBdev1 00:27:30.585 09:53:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:27:30.585 09:53:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:27:30.585 09:53:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:30.585 09:53:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local i 00:27:30.585 09:53:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:30.585 09:53:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:30.585 09:53:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:30.843 09:53:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:30.843 [ 00:27:30.843 { 00:27:30.843 "name": "BaseBdev1", 00:27:30.843 "aliases": [ 00:27:30.843 "27f9185a-4290-11ef-a0af-c98d8ee52a94" 00:27:30.843 ], 00:27:30.843 "product_name": "Malloc disk", 00:27:30.843 "block_size": 4096, 00:27:30.843 "num_blocks": 8192, 00:27:30.843 "uuid": "27f9185a-4290-11ef-a0af-c98d8ee52a94", 00:27:30.843 "assigned_rate_limits": { 00:27:30.843 "rw_ios_per_sec": 0, 00:27:30.843 "rw_mbytes_per_sec": 0, 00:27:30.843 "r_mbytes_per_sec": 0, 00:27:30.843 "w_mbytes_per_sec": 0 00:27:30.843 }, 00:27:30.843 "claimed": true, 00:27:30.843 "claim_type": "exclusive_write", 00:27:30.843 "zoned": false, 00:27:30.843 "supported_io_types": { 00:27:30.843 "read": true, 00:27:30.843 "write": true, 00:27:30.843 "unmap": true, 00:27:30.843 "flush": true, 00:27:30.843 "reset": true, 00:27:30.843 "nvme_admin": false, 00:27:30.843 "nvme_io": false, 00:27:30.843 "nvme_io_md": false, 00:27:30.843 "write_zeroes": true, 00:27:30.843 "zcopy": true, 00:27:30.843 "get_zone_info": false, 00:27:30.843 "zone_management": false, 00:27:30.843 "zone_append": false, 00:27:30.843 "compare": false, 00:27:30.843 "compare_and_write": false, 00:27:30.843 "abort": true, 00:27:30.843 "seek_hole": false, 00:27:30.843 "seek_data": false, 00:27:30.843 "copy": true, 00:27:30.843 "nvme_iov_md": false 00:27:30.843 }, 00:27:30.843 "memory_domains": [ 00:27:30.843 { 00:27:30.843 "dma_device_id": "system", 00:27:30.843 "dma_device_type": 1 00:27:30.843 }, 00:27:30.843 { 00:27:30.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:30.843 "dma_device_type": 2 00:27:30.843 } 00:27:30.843 ], 00:27:30.843 "driver_specific": {} 00:27:30.843 } 00:27:30.843 ] 00:27:30.843 09:53:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # return 0 00:27:30.843 09:53:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:30.843 09:53:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:30.843 09:53:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:30.843 
09:53:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:30.843 09:53:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:30.843 09:53:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:30.843 09:53:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:30.843 09:53:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:30.843 09:53:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:30.843 09:53:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:30.843 09:53:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:30.843 09:53:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:31.101 09:53:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:31.101 "name": "Existed_Raid", 00:27:31.101 "uuid": "27dd0449-4290-11ef-a0af-c98d8ee52a94", 00:27:31.101 "strip_size_kb": 0, 00:27:31.101 "state": "configuring", 00:27:31.101 "raid_level": "raid1", 00:27:31.101 "superblock": true, 00:27:31.101 "num_base_bdevs": 2, 00:27:31.101 "num_base_bdevs_discovered": 1, 00:27:31.101 "num_base_bdevs_operational": 2, 00:27:31.101 "base_bdevs_list": [ 00:27:31.101 { 00:27:31.101 "name": "BaseBdev1", 00:27:31.101 "uuid": "27f9185a-4290-11ef-a0af-c98d8ee52a94", 00:27:31.101 "is_configured": true, 00:27:31.101 "data_offset": 256, 00:27:31.101 "data_size": 7936 00:27:31.101 }, 00:27:31.101 { 00:27:31.101 "name": "BaseBdev2", 00:27:31.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:31.101 "is_configured": false, 00:27:31.101 "data_offset": 0, 00:27:31.101 "data_size": 0 00:27:31.101 } 00:27:31.101 ] 00:27:31.101 }' 00:27:31.101 09:53:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:31.101 09:53:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:31.357 09:53:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:31.613 [2024-07-15 09:53:59.653532] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:31.613 [2024-07-15 09:53:59.653567] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x33a11e634500 name Existed_Raid, state configuring 00:27:31.613 09:53:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:27:31.869 [2024-07-15 09:53:59.861549] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:31.869 [2024-07-15 09:53:59.862454] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:31.869 [2024-07-15 09:53:59.862504] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:31.869 09:53:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:27:31.869 09:53:59 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:27:31.869 09:53:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:31.869 09:53:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:31.869 09:53:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:31.869 09:53:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:31.869 09:53:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:31.869 09:53:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:31.869 09:53:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:31.869 09:53:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:31.869 09:53:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:31.869 09:53:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:31.869 09:53:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:31.869 09:53:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:32.145 09:54:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:32.145 "name": "Existed_Raid", 00:27:32.145 "uuid": "28c6cb77-4290-11ef-a0af-c98d8ee52a94", 00:27:32.145 "strip_size_kb": 0, 00:27:32.145 "state": "configuring", 00:27:32.145 "raid_level": "raid1", 00:27:32.145 "superblock": true, 00:27:32.145 "num_base_bdevs": 2, 00:27:32.145 "num_base_bdevs_discovered": 1, 00:27:32.145 "num_base_bdevs_operational": 2, 00:27:32.145 "base_bdevs_list": [ 00:27:32.145 { 00:27:32.145 "name": "BaseBdev1", 00:27:32.145 "uuid": "27f9185a-4290-11ef-a0af-c98d8ee52a94", 00:27:32.145 "is_configured": true, 00:27:32.145 "data_offset": 256, 00:27:32.145 "data_size": 7936 00:27:32.145 }, 00:27:32.145 { 00:27:32.145 "name": "BaseBdev2", 00:27:32.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:32.145 "is_configured": false, 00:27:32.145 "data_offset": 0, 00:27:32.145 "data_size": 0 00:27:32.145 } 00:27:32.145 ] 00:27:32.145 }' 00:27:32.145 09:54:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:32.145 09:54:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:32.421 09:54:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:27:32.680 [2024-07-15 09:54:00.593723] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:32.680 [2024-07-15 09:54:00.593787] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x33a11e634a00 00:27:32.680 [2024-07-15 09:54:00.593792] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:32.680 [2024-07-15 09:54:00.593811] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x33a11e697e20 
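(The 4k geometry in the trace checks out: bdev_malloc_create 32 4096 yields 8192 blocks (32 MiB / 4 KiB), the -s superblock reserves 256 blocks, so data_offset is 256 and data_size 7936 — matching the "blockcnt 7936, blocklen 4096" logged by raid_bdev_configure_cont. Reduced from the trace, with the $rpc shorthand assumed:)

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # 32 MiB malloc bdev with 4096-byte blocks -> 8192 blocks
    $rpc bdev_malloc_create 32 4096 -b BaseBdev2
    # raid1 with superblock: 256 blocks reserved, 7936 data blocks remain
    $rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid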
00:27:32.680 [2024-07-15 09:54:00.593850] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x33a11e634a00 00:27:32.680 [2024-07-15 09:54:00.593853] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x33a11e634a00 00:27:32.680 [2024-07-15 09:54:00.593887] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:32.680 BaseBdev2 00:27:32.680 09:54:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:27:32.680 09:54:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:27:32.680 09:54:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:32.680 09:54:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local i 00:27:32.680 09:54:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:32.680 09:54:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:32.680 09:54:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:32.938 09:54:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:32.938 [ 00:27:32.938 { 00:27:32.938 "name": "BaseBdev2", 00:27:32.938 "aliases": [ 00:27:32.938 "29367ef9-4290-11ef-a0af-c98d8ee52a94" 00:27:32.938 ], 00:27:32.938 "product_name": "Malloc disk", 00:27:32.938 "block_size": 4096, 00:27:32.938 "num_blocks": 8192, 00:27:32.938 "uuid": "29367ef9-4290-11ef-a0af-c98d8ee52a94", 00:27:32.938 "assigned_rate_limits": { 00:27:32.938 "rw_ios_per_sec": 0, 00:27:32.938 "rw_mbytes_per_sec": 0, 00:27:32.938 "r_mbytes_per_sec": 0, 00:27:32.938 "w_mbytes_per_sec": 0 00:27:32.938 }, 00:27:32.938 "claimed": true, 00:27:32.938 "claim_type": "exclusive_write", 00:27:32.938 "zoned": false, 00:27:32.938 "supported_io_types": { 00:27:32.938 "read": true, 00:27:32.938 "write": true, 00:27:32.938 "unmap": true, 00:27:32.938 "flush": true, 00:27:32.938 "reset": true, 00:27:32.938 "nvme_admin": false, 00:27:32.938 "nvme_io": false, 00:27:32.938 "nvme_io_md": false, 00:27:32.938 "write_zeroes": true, 00:27:32.938 "zcopy": true, 00:27:32.938 "get_zone_info": false, 00:27:32.938 "zone_management": false, 00:27:32.938 "zone_append": false, 00:27:32.938 "compare": false, 00:27:32.938 "compare_and_write": false, 00:27:32.938 "abort": true, 00:27:32.938 "seek_hole": false, 00:27:32.938 "seek_data": false, 00:27:32.938 "copy": true, 00:27:32.938 "nvme_iov_md": false 00:27:32.938 }, 00:27:32.938 "memory_domains": [ 00:27:32.938 { 00:27:32.938 "dma_device_id": "system", 00:27:32.938 "dma_device_type": 1 00:27:32.938 }, 00:27:32.938 { 00:27:32.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:32.938 "dma_device_type": 2 00:27:32.938 } 00:27:32.938 ], 00:27:32.938 "driver_specific": {} 00:27:32.938 } 00:27:32.938 ] 00:27:32.938 09:54:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # return 0 00:27:32.938 09:54:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:27:32.938 09:54:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:27:32.938 09:54:01 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:27:32.938 09:54:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:32.938 09:54:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:32.938 09:54:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:32.938 09:54:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:32.938 09:54:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:32.938 09:54:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:32.938 09:54:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:32.938 09:54:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:32.938 09:54:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:32.938 09:54:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:32.938 09:54:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:33.197 09:54:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:33.197 "name": "Existed_Raid", 00:27:33.197 "uuid": "28c6cb77-4290-11ef-a0af-c98d8ee52a94", 00:27:33.197 "strip_size_kb": 0, 00:27:33.197 "state": "online", 00:27:33.197 "raid_level": "raid1", 00:27:33.197 "superblock": true, 00:27:33.197 "num_base_bdevs": 2, 00:27:33.197 "num_base_bdevs_discovered": 2, 00:27:33.197 "num_base_bdevs_operational": 2, 00:27:33.197 "base_bdevs_list": [ 00:27:33.197 { 00:27:33.197 "name": "BaseBdev1", 00:27:33.197 "uuid": "27f9185a-4290-11ef-a0af-c98d8ee52a94", 00:27:33.197 "is_configured": true, 00:27:33.197 "data_offset": 256, 00:27:33.197 "data_size": 7936 00:27:33.197 }, 00:27:33.197 { 00:27:33.197 "name": "BaseBdev2", 00:27:33.197 "uuid": "29367ef9-4290-11ef-a0af-c98d8ee52a94", 00:27:33.197 "is_configured": true, 00:27:33.197 "data_offset": 256, 00:27:33.197 "data_size": 7936 00:27:33.197 } 00:27:33.197 ] 00:27:33.197 }' 00:27:33.197 09:54:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:33.197 09:54:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:33.455 09:54:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:27:33.455 09:54:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:27:33.455 09:54:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:33.455 09:54:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:33.455 09:54:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:33.455 09:54:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local name 00:27:33.455 09:54:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:27:33.455 09:54:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:33.713 [2024-07-15 09:54:01.721731] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:33.713 09:54:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:33.713 "name": "Existed_Raid", 00:27:33.713 "aliases": [ 00:27:33.713 "28c6cb77-4290-11ef-a0af-c98d8ee52a94" 00:27:33.713 ], 00:27:33.713 "product_name": "Raid Volume", 00:27:33.713 "block_size": 4096, 00:27:33.713 "num_blocks": 7936, 00:27:33.713 "uuid": "28c6cb77-4290-11ef-a0af-c98d8ee52a94", 00:27:33.713 "assigned_rate_limits": { 00:27:33.713 "rw_ios_per_sec": 0, 00:27:33.713 "rw_mbytes_per_sec": 0, 00:27:33.713 "r_mbytes_per_sec": 0, 00:27:33.713 "w_mbytes_per_sec": 0 00:27:33.713 }, 00:27:33.713 "claimed": false, 00:27:33.713 "zoned": false, 00:27:33.713 "supported_io_types": { 00:27:33.713 "read": true, 00:27:33.713 "write": true, 00:27:33.713 "unmap": false, 00:27:33.713 "flush": false, 00:27:33.713 "reset": true, 00:27:33.713 "nvme_admin": false, 00:27:33.713 "nvme_io": false, 00:27:33.713 "nvme_io_md": false, 00:27:33.713 "write_zeroes": true, 00:27:33.713 "zcopy": false, 00:27:33.713 "get_zone_info": false, 00:27:33.713 "zone_management": false, 00:27:33.713 "zone_append": false, 00:27:33.713 "compare": false, 00:27:33.713 "compare_and_write": false, 00:27:33.713 "abort": false, 00:27:33.713 "seek_hole": false, 00:27:33.713 "seek_data": false, 00:27:33.713 "copy": false, 00:27:33.713 "nvme_iov_md": false 00:27:33.713 }, 00:27:33.713 "memory_domains": [ 00:27:33.713 { 00:27:33.713 "dma_device_id": "system", 00:27:33.713 "dma_device_type": 1 00:27:33.713 }, 00:27:33.713 { 00:27:33.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:33.713 "dma_device_type": 2 00:27:33.713 }, 00:27:33.713 { 00:27:33.713 "dma_device_id": "system", 00:27:33.713 "dma_device_type": 1 00:27:33.713 }, 00:27:33.713 { 00:27:33.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:33.713 "dma_device_type": 2 00:27:33.713 } 00:27:33.713 ], 00:27:33.713 "driver_specific": { 00:27:33.713 "raid": { 00:27:33.713 "uuid": "28c6cb77-4290-11ef-a0af-c98d8ee52a94", 00:27:33.713 "strip_size_kb": 0, 00:27:33.713 "state": "online", 00:27:33.713 "raid_level": "raid1", 00:27:33.713 "superblock": true, 00:27:33.713 "num_base_bdevs": 2, 00:27:33.713 "num_base_bdevs_discovered": 2, 00:27:33.713 "num_base_bdevs_operational": 2, 00:27:33.713 "base_bdevs_list": [ 00:27:33.713 { 00:27:33.713 "name": "BaseBdev1", 00:27:33.713 "uuid": "27f9185a-4290-11ef-a0af-c98d8ee52a94", 00:27:33.713 "is_configured": true, 00:27:33.713 "data_offset": 256, 00:27:33.713 "data_size": 7936 00:27:33.713 }, 00:27:33.713 { 00:27:33.713 "name": "BaseBdev2", 00:27:33.713 "uuid": "29367ef9-4290-11ef-a0af-c98d8ee52a94", 00:27:33.713 "is_configured": true, 00:27:33.713 "data_offset": 256, 00:27:33.713 "data_size": 7936 00:27:33.713 } 00:27:33.713 ] 00:27:33.713 } 00:27:33.713 } 00:27:33.713 }' 00:27:33.713 09:54:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:33.713 09:54:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:27:33.713 BaseBdev2' 00:27:33.713 09:54:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:33.713 09:54:01 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:27:33.713 09:54:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:33.972 09:54:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:33.972 "name": "BaseBdev1", 00:27:33.972 "aliases": [ 00:27:33.972 "27f9185a-4290-11ef-a0af-c98d8ee52a94" 00:27:33.972 ], 00:27:33.972 "product_name": "Malloc disk", 00:27:33.972 "block_size": 4096, 00:27:33.972 "num_blocks": 8192, 00:27:33.972 "uuid": "27f9185a-4290-11ef-a0af-c98d8ee52a94", 00:27:33.972 "assigned_rate_limits": { 00:27:33.972 "rw_ios_per_sec": 0, 00:27:33.972 "rw_mbytes_per_sec": 0, 00:27:33.972 "r_mbytes_per_sec": 0, 00:27:33.972 "w_mbytes_per_sec": 0 00:27:33.972 }, 00:27:33.972 "claimed": true, 00:27:33.972 "claim_type": "exclusive_write", 00:27:33.972 "zoned": false, 00:27:33.972 "supported_io_types": { 00:27:33.972 "read": true, 00:27:33.972 "write": true, 00:27:33.972 "unmap": true, 00:27:33.972 "flush": true, 00:27:33.972 "reset": true, 00:27:33.972 "nvme_admin": false, 00:27:33.972 "nvme_io": false, 00:27:33.972 "nvme_io_md": false, 00:27:33.972 "write_zeroes": true, 00:27:33.972 "zcopy": true, 00:27:33.972 "get_zone_info": false, 00:27:33.972 "zone_management": false, 00:27:33.972 "zone_append": false, 00:27:33.972 "compare": false, 00:27:33.972 "compare_and_write": false, 00:27:33.972 "abort": true, 00:27:33.972 "seek_hole": false, 00:27:33.972 "seek_data": false, 00:27:33.972 "copy": true, 00:27:33.972 "nvme_iov_md": false 00:27:33.972 }, 00:27:33.972 "memory_domains": [ 00:27:33.972 { 00:27:33.972 "dma_device_id": "system", 00:27:33.972 "dma_device_type": 1 00:27:33.972 }, 00:27:33.972 { 00:27:33.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:33.972 "dma_device_type": 2 00:27:33.972 } 00:27:33.972 ], 00:27:33.972 "driver_specific": {} 00:27:33.972 }' 00:27:33.972 09:54:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:33.972 09:54:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:33.972 09:54:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:27:33.972 09:54:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:33.972 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:33.972 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:33.972 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:33.972 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:33.972 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:33.972 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:33.972 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:33.972 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:33.972 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:33.972 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:27:33.972 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:34.231 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:34.231 "name": "BaseBdev2", 00:27:34.231 "aliases": [ 00:27:34.231 "29367ef9-4290-11ef-a0af-c98d8ee52a94" 00:27:34.231 ], 00:27:34.231 "product_name": "Malloc disk", 00:27:34.231 "block_size": 4096, 00:27:34.231 "num_blocks": 8192, 00:27:34.231 "uuid": "29367ef9-4290-11ef-a0af-c98d8ee52a94", 00:27:34.231 "assigned_rate_limits": { 00:27:34.231 "rw_ios_per_sec": 0, 00:27:34.231 "rw_mbytes_per_sec": 0, 00:27:34.231 "r_mbytes_per_sec": 0, 00:27:34.231 "w_mbytes_per_sec": 0 00:27:34.231 }, 00:27:34.231 "claimed": true, 00:27:34.231 "claim_type": "exclusive_write", 00:27:34.231 "zoned": false, 00:27:34.231 "supported_io_types": { 00:27:34.231 "read": true, 00:27:34.231 "write": true, 00:27:34.231 "unmap": true, 00:27:34.231 "flush": true, 00:27:34.231 "reset": true, 00:27:34.231 "nvme_admin": false, 00:27:34.231 "nvme_io": false, 00:27:34.231 "nvme_io_md": false, 00:27:34.231 "write_zeroes": true, 00:27:34.231 "zcopy": true, 00:27:34.231 "get_zone_info": false, 00:27:34.231 "zone_management": false, 00:27:34.231 "zone_append": false, 00:27:34.231 "compare": false, 00:27:34.231 "compare_and_write": false, 00:27:34.231 "abort": true, 00:27:34.231 "seek_hole": false, 00:27:34.231 "seek_data": false, 00:27:34.231 "copy": true, 00:27:34.231 "nvme_iov_md": false 00:27:34.231 }, 00:27:34.231 "memory_domains": [ 00:27:34.231 { 00:27:34.231 "dma_device_id": "system", 00:27:34.231 "dma_device_type": 1 00:27:34.231 }, 00:27:34.231 { 00:27:34.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:34.231 "dma_device_type": 2 00:27:34.231 } 00:27:34.231 ], 00:27:34.231 "driver_specific": {} 00:27:34.231 }' 00:27:34.231 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:34.231 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:34.231 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:27:34.231 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:34.231 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:34.231 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:34.231 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:34.489 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:34.489 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:34.489 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:34.489 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:34.489 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:34.489 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:27:34.489 [2024-07-15 09:54:02.557722] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:34.489 
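(Hot removal is the case exercised next: a base bdev is deleted out from under the online raid1. Reduced from the trace; the jq state extraction is added here for illustration of what the subsequent verification checks.)

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_malloc_delete BaseBdev1
    # with raid1 redundancy the array stays online on the surviving leg
    $rpc bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .state, .num_base_bdevs_discovered'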
09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # local expected_state 00:27:34.489 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:27:34.489 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:34.489 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:27:34.489 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:27:34.489 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:27:34.489 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:34.489 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:34.489 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:34.489 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:34.489 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:34.489 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:34.489 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:34.489 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:34.489 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:34.489 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:34.489 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:34.747 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:34.747 "name": "Existed_Raid", 00:27:34.747 "uuid": "28c6cb77-4290-11ef-a0af-c98d8ee52a94", 00:27:34.747 "strip_size_kb": 0, 00:27:34.747 "state": "online", 00:27:34.747 "raid_level": "raid1", 00:27:34.747 "superblock": true, 00:27:34.747 "num_base_bdevs": 2, 00:27:34.747 "num_base_bdevs_discovered": 1, 00:27:34.747 "num_base_bdevs_operational": 1, 00:27:34.747 "base_bdevs_list": [ 00:27:34.747 { 00:27:34.747 "name": null, 00:27:34.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:34.747 "is_configured": false, 00:27:34.747 "data_offset": 256, 00:27:34.747 "data_size": 7936 00:27:34.747 }, 00:27:34.747 { 00:27:34.747 "name": "BaseBdev2", 00:27:34.747 "uuid": "29367ef9-4290-11ef-a0af-c98d8ee52a94", 00:27:34.747 "is_configured": true, 00:27:34.747 "data_offset": 256, 00:27:34.747 "data_size": 7936 00:27:34.747 } 00:27:34.747 ] 00:27:34.747 }' 00:27:34.747 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:34.747 09:54:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:35.005 09:54:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:27:35.005 09:54:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:35.005 09:54:03 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:35.005 09:54:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:27:35.261 09:54:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:27:35.261 09:54:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:35.261 09:54:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:27:35.519 [2024-07-15 09:54:03.498541] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:35.519 [2024-07-15 09:54:03.498606] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:35.519 [2024-07-15 09:54:03.507439] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:35.519 [2024-07-15 09:54:03.507472] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:35.519 [2024-07-15 09:54:03.507476] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x33a11e634a00 name Existed_Raid, state offline 00:27:35.519 09:54:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:27:35.519 09:54:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:35.519 09:54:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:27:35.519 09:54:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:35.776 09:54:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:27:35.776 09:54:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:27:35.776 09:54:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:27:35.776 09:54:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@341 -- # killprocess 65270 00:27:35.776 09:54:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@948 -- # '[' -z 65270 ']' 00:27:35.776 09:54:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # kill -0 65270 00:27:35.776 09:54:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # uname 00:27:35.776 09:54:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:27:35.776 09:54:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps -c -o command 65270 00:27:35.776 09:54:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # tail -1 00:27:35.776 09:54:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:27:35.776 09:54:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:27:35.776 killing process with pid 65270 00:27:35.776 09:54:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65270' 00:27:35.776 09:54:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@967 -- # kill 65270 
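(Teardown goes through the killprocess helper; below is a simplified sketch of its FreeBSD branch as traced above. The real common/autotest_common.sh implementation carries more checks and error handling, so treat this as an approximation.)

    killprocess() {
        local pid=$1
        kill -0 "$pid"                                     # fails if the pid is already gone
        process_name=$(ps -c -o command "$pid" | tail -1)  # FreeBSD ps; resolves to bdev_svc here
        if [ "$process_name" != sudo ]; then               # (the sudo special case is elided)
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"
        fi
    }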
00:27:35.776 [2024-07-15 09:54:03.741266] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:35.776 [2024-07-15 09:54:03.741310] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:35.776 09:54:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # wait 65270 00:27:36.034 09:54:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@343 -- # return 0 00:27:36.034 00:27:36.034 real 0m7.843s 00:27:36.034 user 0m13.223s 00:27:36.034 sys 0m1.694s 00:27:36.034 09:54:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:36.034 ************************************ 00:27:36.034 END TEST raid_state_function_test_sb_4k 00:27:36.034 ************************************ 00:27:36.034 09:54:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:36.034 09:54:04 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:27:36.034 09:54:04 bdev_raid -- bdev/bdev_raid.sh@899 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:27:36.034 09:54:04 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:27:36.034 09:54:04 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:36.034 09:54:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:36.034 ************************************ 00:27:36.034 START TEST raid_superblock_test_4k 00:27:36.034 ************************************ 00:27:36.034 09:54:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:27:36.034 09:54:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:27:36.034 09:54:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:27:36.034 09:54:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:27:36.034 09:54:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:27:36.034 09:54:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:27:36.034 09:54:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:27:36.034 09:54:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:27:36.034 09:54:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:27:36.034 09:54:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:27:36.034 09:54:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local strip_size 00:27:36.034 09:54:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:27:36.034 09:54:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:27:36.034 09:54:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:27:36.034 09:54:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:27:36.034 09:54:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:27:36.034 09:54:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # raid_pid=65536 00:27:36.034 09:54:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # waitforlisten 65536 /var/tmp/spdk-raid.sock 00:27:36.034 09:54:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:27:36.034 09:54:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@829 -- # '[' -z 65536 ']' 00:27:36.034 09:54:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:36.034 09:54:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:36.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:36.034 09:54:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:36.034 09:54:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:36.034 09:54:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:36.034 [2024-07-15 09:54:04.074453] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:27:36.034 [2024-07-15 09:54:04.074770] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:27:36.968 EAL: TSC is not safe to use in SMP mode 00:27:36.968 EAL: TSC is not invariant 00:27:36.968 [2024-07-15 09:54:04.793217] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:36.968 [2024-07-15 09:54:04.906501] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:27:36.968 [2024-07-15 09:54:04.908977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:36.968 [2024-07-15 09:54:04.909682] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:36.968 [2024-07-15 09:54:04.909693] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:36.968 09:54:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:36.968 09:54:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@862 -- # return 0 00:27:36.968 09:54:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:27:36.968 09:54:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:27:36.968 09:54:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:27:36.968 09:54:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:27:36.968 09:54:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:27:36.968 09:54:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:36.968 09:54:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:27:36.968 09:54:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:36.968 09:54:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:27:37.226 malloc1 00:27:37.226 09:54:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:37.485 
[2024-07-15 09:54:05.396658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:37.485 [2024-07-15 09:54:05.396734] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:37.485 [2024-07-15 09:54:05.396746] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x177c4f034780 00:27:37.485 [2024-07-15 09:54:05.396753] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:37.485 [2024-07-15 09:54:05.397873] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:37.485 [2024-07-15 09:54:05.397903] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:37.485 pt1 00:27:37.485 09:54:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:27:37.485 09:54:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:27:37.485 09:54:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:27:37.485 09:54:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:27:37.485 09:54:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:27:37.485 09:54:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:37.485 09:54:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:27:37.485 09:54:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:37.485 09:54:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:27:37.743 malloc2 00:27:37.743 09:54:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:38.006 [2024-07-15 09:54:05.884674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:38.006 [2024-07-15 09:54:05.884744] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:38.006 [2024-07-15 09:54:05.884756] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x177c4f034c80 00:27:38.006 [2024-07-15 09:54:05.884763] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:38.006 [2024-07-15 09:54:05.885586] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:38.006 [2024-07-15 09:54:05.885616] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:38.006 pt2 00:27:38.006 09:54:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:27:38.006 09:54:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:27:38.006 09:54:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:27:38.006 [2024-07-15 09:54:06.084687] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:38.006 [2024-07-15 09:54:06.085384] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:38.006 [2024-07-15 09:54:06.085456] 
bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x177c4f034f00 00:27:38.006 [2024-07-15 09:54:06.085465] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:38.006 [2024-07-15 09:54:06.085507] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x177c4f097e20 00:27:38.006 [2024-07-15 09:54:06.085583] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x177c4f034f00 00:27:38.006 [2024-07-15 09:54:06.085586] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x177c4f034f00 00:27:38.006 [2024-07-15 09:54:06.085614] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:38.006 09:54:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:38.006 09:54:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:38.006 09:54:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:38.006 09:54:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:38.006 09:54:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:38.006 09:54:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:38.006 09:54:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:38.006 09:54:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:38.006 09:54:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:38.006 09:54:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:38.006 09:54:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:38.006 09:54:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:38.273 09:54:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:38.273 "name": "raid_bdev1", 00:27:38.273 "uuid": "2c7c5e96-4290-11ef-a0af-c98d8ee52a94", 00:27:38.273 "strip_size_kb": 0, 00:27:38.273 "state": "online", 00:27:38.273 "raid_level": "raid1", 00:27:38.273 "superblock": true, 00:27:38.273 "num_base_bdevs": 2, 00:27:38.273 "num_base_bdevs_discovered": 2, 00:27:38.273 "num_base_bdevs_operational": 2, 00:27:38.273 "base_bdevs_list": [ 00:27:38.273 { 00:27:38.273 "name": "pt1", 00:27:38.273 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:38.273 "is_configured": true, 00:27:38.273 "data_offset": 256, 00:27:38.273 "data_size": 7936 00:27:38.273 }, 00:27:38.273 { 00:27:38.273 "name": "pt2", 00:27:38.273 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:38.273 "is_configured": true, 00:27:38.273 "data_offset": 256, 00:27:38.273 "data_size": 7936 00:27:38.273 } 00:27:38.273 ] 00:27:38.273 }' 00:27:38.273 09:54:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:38.273 09:54:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:38.531 09:54:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:27:38.816 09:54:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # 
local raid_bdev_name=raid_bdev1 00:27:38.816 09:54:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:38.816 09:54:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:38.816 09:54:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:38.816 09:54:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:27:38.816 09:54:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:38.816 09:54:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:38.816 [2024-07-15 09:54:06.816822] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:38.816 09:54:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:38.816 "name": "raid_bdev1", 00:27:38.816 "aliases": [ 00:27:38.816 "2c7c5e96-4290-11ef-a0af-c98d8ee52a94" 00:27:38.816 ], 00:27:38.816 "product_name": "Raid Volume", 00:27:38.816 "block_size": 4096, 00:27:38.816 "num_blocks": 7936, 00:27:38.816 "uuid": "2c7c5e96-4290-11ef-a0af-c98d8ee52a94", 00:27:38.816 "assigned_rate_limits": { 00:27:38.816 "rw_ios_per_sec": 0, 00:27:38.816 "rw_mbytes_per_sec": 0, 00:27:38.816 "r_mbytes_per_sec": 0, 00:27:38.816 "w_mbytes_per_sec": 0 00:27:38.816 }, 00:27:38.816 "claimed": false, 00:27:38.816 "zoned": false, 00:27:38.816 "supported_io_types": { 00:27:38.816 "read": true, 00:27:38.816 "write": true, 00:27:38.816 "unmap": false, 00:27:38.816 "flush": false, 00:27:38.817 "reset": true, 00:27:38.817 "nvme_admin": false, 00:27:38.817 "nvme_io": false, 00:27:38.817 "nvme_io_md": false, 00:27:38.817 "write_zeroes": true, 00:27:38.817 "zcopy": false, 00:27:38.817 "get_zone_info": false, 00:27:38.817 "zone_management": false, 00:27:38.817 "zone_append": false, 00:27:38.817 "compare": false, 00:27:38.817 "compare_and_write": false, 00:27:38.817 "abort": false, 00:27:38.817 "seek_hole": false, 00:27:38.817 "seek_data": false, 00:27:38.817 "copy": false, 00:27:38.817 "nvme_iov_md": false 00:27:38.817 }, 00:27:38.817 "memory_domains": [ 00:27:38.817 { 00:27:38.817 "dma_device_id": "system", 00:27:38.817 "dma_device_type": 1 00:27:38.817 }, 00:27:38.817 { 00:27:38.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:38.817 "dma_device_type": 2 00:27:38.817 }, 00:27:38.817 { 00:27:38.817 "dma_device_id": "system", 00:27:38.817 "dma_device_type": 1 00:27:38.817 }, 00:27:38.817 { 00:27:38.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:38.817 "dma_device_type": 2 00:27:38.817 } 00:27:38.817 ], 00:27:38.817 "driver_specific": { 00:27:38.817 "raid": { 00:27:38.817 "uuid": "2c7c5e96-4290-11ef-a0af-c98d8ee52a94", 00:27:38.817 "strip_size_kb": 0, 00:27:38.817 "state": "online", 00:27:38.817 "raid_level": "raid1", 00:27:38.817 "superblock": true, 00:27:38.817 "num_base_bdevs": 2, 00:27:38.817 "num_base_bdevs_discovered": 2, 00:27:38.817 "num_base_bdevs_operational": 2, 00:27:38.817 "base_bdevs_list": [ 00:27:38.817 { 00:27:38.817 "name": "pt1", 00:27:38.817 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:38.817 "is_configured": true, 00:27:38.817 "data_offset": 256, 00:27:38.817 "data_size": 7936 00:27:38.817 }, 00:27:38.817 { 00:27:38.817 "name": "pt2", 00:27:38.817 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:38.817 "is_configured": true, 00:27:38.817 "data_offset": 256, 00:27:38.817 "data_size": 7936 
00:27:38.817 } 00:27:38.817 ] 00:27:38.817 } 00:27:38.817 } 00:27:38.817 }' 00:27:38.817 09:54:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:38.817 09:54:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:27:38.817 pt2' 00:27:38.817 09:54:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:38.817 09:54:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:27:38.817 09:54:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:39.076 09:54:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:39.076 "name": "pt1", 00:27:39.076 "aliases": [ 00:27:39.076 "00000000-0000-0000-0000-000000000001" 00:27:39.076 ], 00:27:39.076 "product_name": "passthru", 00:27:39.076 "block_size": 4096, 00:27:39.076 "num_blocks": 8192, 00:27:39.076 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:39.076 "assigned_rate_limits": { 00:27:39.076 "rw_ios_per_sec": 0, 00:27:39.076 "rw_mbytes_per_sec": 0, 00:27:39.076 "r_mbytes_per_sec": 0, 00:27:39.076 "w_mbytes_per_sec": 0 00:27:39.076 }, 00:27:39.076 "claimed": true, 00:27:39.076 "claim_type": "exclusive_write", 00:27:39.076 "zoned": false, 00:27:39.076 "supported_io_types": { 00:27:39.076 "read": true, 00:27:39.076 "write": true, 00:27:39.076 "unmap": true, 00:27:39.076 "flush": true, 00:27:39.076 "reset": true, 00:27:39.076 "nvme_admin": false, 00:27:39.076 "nvme_io": false, 00:27:39.076 "nvme_io_md": false, 00:27:39.076 "write_zeroes": true, 00:27:39.076 "zcopy": true, 00:27:39.076 "get_zone_info": false, 00:27:39.076 "zone_management": false, 00:27:39.076 "zone_append": false, 00:27:39.076 "compare": false, 00:27:39.076 "compare_and_write": false, 00:27:39.076 "abort": true, 00:27:39.076 "seek_hole": false, 00:27:39.076 "seek_data": false, 00:27:39.076 "copy": true, 00:27:39.076 "nvme_iov_md": false 00:27:39.076 }, 00:27:39.076 "memory_domains": [ 00:27:39.076 { 00:27:39.076 "dma_device_id": "system", 00:27:39.076 "dma_device_type": 1 00:27:39.076 }, 00:27:39.076 { 00:27:39.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:39.076 "dma_device_type": 2 00:27:39.076 } 00:27:39.076 ], 00:27:39.076 "driver_specific": { 00:27:39.076 "passthru": { 00:27:39.076 "name": "pt1", 00:27:39.076 "base_bdev_name": "malloc1" 00:27:39.076 } 00:27:39.076 } 00:27:39.076 }' 00:27:39.076 09:54:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:39.076 09:54:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:39.076 09:54:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:27:39.076 09:54:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:39.076 09:54:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:39.076 09:54:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:39.076 09:54:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:39.076 09:54:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:39.076 09:54:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:39.076 09:54:07 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:39.076 09:54:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:39.076 09:54:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:39.076 09:54:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:39.076 09:54:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:27:39.076 09:54:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:39.334 09:54:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:39.334 "name": "pt2", 00:27:39.334 "aliases": [ 00:27:39.334 "00000000-0000-0000-0000-000000000002" 00:27:39.334 ], 00:27:39.334 "product_name": "passthru", 00:27:39.334 "block_size": 4096, 00:27:39.334 "num_blocks": 8192, 00:27:39.334 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:39.334 "assigned_rate_limits": { 00:27:39.334 "rw_ios_per_sec": 0, 00:27:39.334 "rw_mbytes_per_sec": 0, 00:27:39.334 "r_mbytes_per_sec": 0, 00:27:39.334 "w_mbytes_per_sec": 0 00:27:39.334 }, 00:27:39.334 "claimed": true, 00:27:39.334 "claim_type": "exclusive_write", 00:27:39.334 "zoned": false, 00:27:39.334 "supported_io_types": { 00:27:39.334 "read": true, 00:27:39.334 "write": true, 00:27:39.334 "unmap": true, 00:27:39.334 "flush": true, 00:27:39.334 "reset": true, 00:27:39.334 "nvme_admin": false, 00:27:39.334 "nvme_io": false, 00:27:39.334 "nvme_io_md": false, 00:27:39.334 "write_zeroes": true, 00:27:39.334 "zcopy": true, 00:27:39.334 "get_zone_info": false, 00:27:39.334 "zone_management": false, 00:27:39.334 "zone_append": false, 00:27:39.334 "compare": false, 00:27:39.334 "compare_and_write": false, 00:27:39.334 "abort": true, 00:27:39.334 "seek_hole": false, 00:27:39.334 "seek_data": false, 00:27:39.334 "copy": true, 00:27:39.334 "nvme_iov_md": false 00:27:39.334 }, 00:27:39.334 "memory_domains": [ 00:27:39.334 { 00:27:39.334 "dma_device_id": "system", 00:27:39.334 "dma_device_type": 1 00:27:39.334 }, 00:27:39.334 { 00:27:39.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:39.334 "dma_device_type": 2 00:27:39.334 } 00:27:39.334 ], 00:27:39.334 "driver_specific": { 00:27:39.334 "passthru": { 00:27:39.334 "name": "pt2", 00:27:39.334 "base_bdev_name": "malloc2" 00:27:39.334 } 00:27:39.334 } 00:27:39.334 }' 00:27:39.334 09:54:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:39.334 09:54:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:39.334 09:54:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:27:39.334 09:54:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:39.593 09:54:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:39.593 09:54:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:39.593 09:54:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:39.593 09:54:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:39.593 09:54:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:39.593 09:54:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:39.593 09:54:07 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:39.593 09:54:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:39.593 09:54:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:27:39.593 09:54:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:39.593 [2024-07-15 09:54:07.676844] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:39.593 09:54:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=2c7c5e96-4290-11ef-a0af-c98d8ee52a94 00:27:39.593 09:54:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # '[' -z 2c7c5e96-4290-11ef-a0af-c98d8ee52a94 ']' 00:27:39.593 09:54:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:39.851 [2024-07-15 09:54:07.892800] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:39.851 [2024-07-15 09:54:07.892833] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:39.851 [2024-07-15 09:54:07.892863] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:39.851 [2024-07-15 09:54:07.892886] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:39.851 [2024-07-15 09:54:07.892892] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x177c4f034f00 name raid_bdev1, state offline 00:27:39.851 09:54:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:39.851 09:54:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:27:40.108 09:54:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:27:40.108 09:54:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:27:40.108 09:54:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:27:40.108 09:54:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:27:40.364 09:54:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:27:40.364 09:54:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:40.621 09:54:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:27:40.621 09:54:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:27:40.879 09:54:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:27:40.879 09:54:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:27:40.879 09:54:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@648 -- # local es=0 
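With both passthru bdevs deleted (the `select(.product_name == "passthru")] | any` probe above returned false), the suite now runs its negative test: the NOT helper unwinding here asserts that re-running bdev_raid_create directly on malloc1 and malloc2 must fail, because both bdevs still carry the superblock written for the earlier raid_bdev1. The expected failure is JSON-RPC error -17 ("File exists"), shown in the response that follows. A minimal sketch of the same expectation — rpc/sock are illustrative shell variables, the bdev_raid_create command line is taken verbatim from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Creation must be rejected: the stale superblock on the base bdevs blocks it.
    if "$rpc" -s "$sock" bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1; then
        echo "bdev_raid_create unexpectedly succeeded" >&2
        exit 1
    fi
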
00:27:40.879 09:54:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:27:40.879 09:54:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:40.879 09:54:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:40.879 09:54:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:40.879 09:54:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:40.879 09:54:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:40.879 09:54:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:40.879 09:54:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:40.879 09:54:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:27:40.879 09:54:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:27:40.879 [2024-07-15 09:54:08.980903] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:27:40.879 [2024-07-15 09:54:08.981611] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:27:40.879 [2024-07-15 09:54:08.981644] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:27:40.879 [2024-07-15 09:54:08.981703] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:27:40.879 [2024-07-15 09:54:08.981712] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:40.880 [2024-07-15 09:54:08.981716] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x177c4f034c80 name raid_bdev1, state configuring 00:27:41.173 request: 00:27:41.173 { 00:27:41.173 "name": "raid_bdev1", 00:27:41.173 "raid_level": "raid1", 00:27:41.173 "base_bdevs": [ 00:27:41.173 "malloc1", 00:27:41.173 "malloc2" 00:27:41.173 ], 00:27:41.173 "superblock": false, 00:27:41.173 "method": "bdev_raid_create", 00:27:41.173 "req_id": 1 00:27:41.173 } 00:27:41.173 Got JSON-RPC error response 00:27:41.173 response: 00:27:41.173 { 00:27:41.173 "code": -17, 00:27:41.173 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:27:41.173 } 00:27:41.173 09:54:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # es=1 00:27:41.173 09:54:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:41.173 09:54:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:41.173 09:54:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:41.173 09:54:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:41.173 
09:54:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:27:41.434 09:54:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:27:41.434 09:54:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:27:41.434 09:54:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:41.434 [2024-07-15 09:54:09.444905] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:41.434 [2024-07-15 09:54:09.444968] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:41.434 [2024-07-15 09:54:09.444981] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x177c4f034780 00:27:41.434 [2024-07-15 09:54:09.444988] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:41.434 [2024-07-15 09:54:09.445761] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:41.434 [2024-07-15 09:54:09.445789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:41.434 [2024-07-15 09:54:09.445813] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:41.434 [2024-07-15 09:54:09.445826] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:41.434 pt1 00:27:41.434 09:54:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:27:41.434 09:54:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:41.434 09:54:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:41.434 09:54:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:41.434 09:54:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:41.434 09:54:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:41.434 09:54:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:41.434 09:54:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:41.434 09:54:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:41.434 09:54:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:41.434 09:54:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:41.434 09:54:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:41.691 09:54:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:41.691 "name": "raid_bdev1", 00:27:41.691 "uuid": "2c7c5e96-4290-11ef-a0af-c98d8ee52a94", 00:27:41.691 "strip_size_kb": 0, 00:27:41.691 "state": "configuring", 00:27:41.691 "raid_level": "raid1", 00:27:41.691 "superblock": true, 00:27:41.691 "num_base_bdevs": 2, 00:27:41.691 "num_base_bdevs_discovered": 1, 00:27:41.691 "num_base_bdevs_operational": 2, 00:27:41.691 "base_bdevs_list": [ 00:27:41.691 { 00:27:41.691 "name": "pt1", 00:27:41.691 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:27:41.691 "is_configured": true, 00:27:41.691 "data_offset": 256, 00:27:41.691 "data_size": 7936 00:27:41.691 }, 00:27:41.691 { 00:27:41.691 "name": null, 00:27:41.691 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:41.691 "is_configured": false, 00:27:41.691 "data_offset": 256, 00:27:41.691 "data_size": 7936 00:27:41.691 } 00:27:41.691 ] 00:27:41.691 }' 00:27:41.691 09:54:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:41.691 09:54:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:41.949 09:54:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:27:41.949 09:54:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:27:41.949 09:54:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:27:41.949 09:54:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:42.208 [2024-07-15 09:54:10.144963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:42.208 [2024-07-15 09:54:10.145040] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:42.208 [2024-07-15 09:54:10.145053] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x177c4f034f00 00:27:42.208 [2024-07-15 09:54:10.145060] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:42.208 [2024-07-15 09:54:10.145227] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:42.208 [2024-07-15 09:54:10.145236] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:42.208 [2024-07-15 09:54:10.145260] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:42.208 [2024-07-15 09:54:10.145268] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:42.208 [2024-07-15 09:54:10.145310] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x177c4f035180 00:27:42.208 [2024-07-15 09:54:10.145314] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:42.208 [2024-07-15 09:54:10.145332] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x177c4f097e20 00:27:42.208 [2024-07-15 09:54:10.145382] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x177c4f035180 00:27:42.208 [2024-07-15 09:54:10.145386] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x177c4f035180 00:27:42.208 [2024-07-15 09:54:10.145402] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:42.208 pt2 00:27:42.208 09:54:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:27:42.208 09:54:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:27:42.208 09:54:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:42.208 09:54:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:42.208 09:54:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:42.208 09:54:10 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:42.208 09:54:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:42.208 09:54:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:42.208 09:54:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:42.208 09:54:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:42.208 09:54:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:42.208 09:54:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:42.208 09:54:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:42.208 09:54:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:42.467 09:54:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:42.467 "name": "raid_bdev1", 00:27:42.467 "uuid": "2c7c5e96-4290-11ef-a0af-c98d8ee52a94", 00:27:42.467 "strip_size_kb": 0, 00:27:42.467 "state": "online", 00:27:42.467 "raid_level": "raid1", 00:27:42.467 "superblock": true, 00:27:42.467 "num_base_bdevs": 2, 00:27:42.467 "num_base_bdevs_discovered": 2, 00:27:42.467 "num_base_bdevs_operational": 2, 00:27:42.467 "base_bdevs_list": [ 00:27:42.467 { 00:27:42.467 "name": "pt1", 00:27:42.467 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:42.467 "is_configured": true, 00:27:42.467 "data_offset": 256, 00:27:42.467 "data_size": 7936 00:27:42.467 }, 00:27:42.467 { 00:27:42.467 "name": "pt2", 00:27:42.467 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:42.467 "is_configured": true, 00:27:42.467 "data_offset": 256, 00:27:42.467 "data_size": 7936 00:27:42.467 } 00:27:42.467 ] 00:27:42.467 }' 00:27:42.467 09:54:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:42.467 09:54:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:42.725 09:54:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:27:42.725 09:54:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:27:42.725 09:54:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:42.725 09:54:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:42.725 09:54:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:42.725 09:54:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:27:42.725 09:54:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:42.725 09:54:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:42.996 [2024-07-15 09:54:10.881068] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:42.996 09:54:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:42.996 "name": "raid_bdev1", 00:27:42.996 "aliases": [ 00:27:42.996 "2c7c5e96-4290-11ef-a0af-c98d8ee52a94" 00:27:42.996 ], 00:27:42.996 "product_name": "Raid Volume", 00:27:42.996 "block_size": 4096, 
00:27:42.996 "num_blocks": 7936, 00:27:42.996 "uuid": "2c7c5e96-4290-11ef-a0af-c98d8ee52a94", 00:27:42.996 "assigned_rate_limits": { 00:27:42.996 "rw_ios_per_sec": 0, 00:27:42.996 "rw_mbytes_per_sec": 0, 00:27:42.996 "r_mbytes_per_sec": 0, 00:27:42.996 "w_mbytes_per_sec": 0 00:27:42.996 }, 00:27:42.996 "claimed": false, 00:27:42.996 "zoned": false, 00:27:42.996 "supported_io_types": { 00:27:42.996 "read": true, 00:27:42.996 "write": true, 00:27:42.996 "unmap": false, 00:27:42.996 "flush": false, 00:27:42.996 "reset": true, 00:27:42.996 "nvme_admin": false, 00:27:42.996 "nvme_io": false, 00:27:42.996 "nvme_io_md": false, 00:27:42.996 "write_zeroes": true, 00:27:42.996 "zcopy": false, 00:27:42.996 "get_zone_info": false, 00:27:42.996 "zone_management": false, 00:27:42.996 "zone_append": false, 00:27:42.997 "compare": false, 00:27:42.997 "compare_and_write": false, 00:27:42.997 "abort": false, 00:27:42.997 "seek_hole": false, 00:27:42.997 "seek_data": false, 00:27:42.997 "copy": false, 00:27:42.997 "nvme_iov_md": false 00:27:42.997 }, 00:27:42.997 "memory_domains": [ 00:27:42.997 { 00:27:42.997 "dma_device_id": "system", 00:27:42.997 "dma_device_type": 1 00:27:42.997 }, 00:27:42.997 { 00:27:42.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:42.997 "dma_device_type": 2 00:27:42.997 }, 00:27:42.997 { 00:27:42.997 "dma_device_id": "system", 00:27:42.997 "dma_device_type": 1 00:27:42.997 }, 00:27:42.997 { 00:27:42.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:42.997 "dma_device_type": 2 00:27:42.997 } 00:27:42.997 ], 00:27:42.997 "driver_specific": { 00:27:42.997 "raid": { 00:27:42.997 "uuid": "2c7c5e96-4290-11ef-a0af-c98d8ee52a94", 00:27:42.997 "strip_size_kb": 0, 00:27:42.997 "state": "online", 00:27:42.997 "raid_level": "raid1", 00:27:42.997 "superblock": true, 00:27:42.997 "num_base_bdevs": 2, 00:27:42.997 "num_base_bdevs_discovered": 2, 00:27:42.997 "num_base_bdevs_operational": 2, 00:27:42.997 "base_bdevs_list": [ 00:27:42.997 { 00:27:42.997 "name": "pt1", 00:27:42.997 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:42.997 "is_configured": true, 00:27:42.997 "data_offset": 256, 00:27:42.997 "data_size": 7936 00:27:42.997 }, 00:27:42.997 { 00:27:42.997 "name": "pt2", 00:27:42.997 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:42.997 "is_configured": true, 00:27:42.997 "data_offset": 256, 00:27:42.997 "data_size": 7936 00:27:42.997 } 00:27:42.997 ] 00:27:42.997 } 00:27:42.997 } 00:27:42.997 }' 00:27:42.997 09:54:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:42.997 09:54:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:27:42.997 pt2' 00:27:42.997 09:54:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:42.997 09:54:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:27:42.997 09:54:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:43.255 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:43.255 "name": "pt1", 00:27:43.255 "aliases": [ 00:27:43.255 "00000000-0000-0000-0000-000000000001" 00:27:43.255 ], 00:27:43.255 "product_name": "passthru", 00:27:43.255 "block_size": 4096, 00:27:43.255 "num_blocks": 8192, 00:27:43.255 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:43.255 
"assigned_rate_limits": { 00:27:43.255 "rw_ios_per_sec": 0, 00:27:43.255 "rw_mbytes_per_sec": 0, 00:27:43.255 "r_mbytes_per_sec": 0, 00:27:43.255 "w_mbytes_per_sec": 0 00:27:43.255 }, 00:27:43.255 "claimed": true, 00:27:43.255 "claim_type": "exclusive_write", 00:27:43.255 "zoned": false, 00:27:43.255 "supported_io_types": { 00:27:43.255 "read": true, 00:27:43.255 "write": true, 00:27:43.255 "unmap": true, 00:27:43.255 "flush": true, 00:27:43.255 "reset": true, 00:27:43.255 "nvme_admin": false, 00:27:43.255 "nvme_io": false, 00:27:43.255 "nvme_io_md": false, 00:27:43.255 "write_zeroes": true, 00:27:43.255 "zcopy": true, 00:27:43.255 "get_zone_info": false, 00:27:43.255 "zone_management": false, 00:27:43.255 "zone_append": false, 00:27:43.255 "compare": false, 00:27:43.255 "compare_and_write": false, 00:27:43.255 "abort": true, 00:27:43.255 "seek_hole": false, 00:27:43.255 "seek_data": false, 00:27:43.255 "copy": true, 00:27:43.255 "nvme_iov_md": false 00:27:43.255 }, 00:27:43.255 "memory_domains": [ 00:27:43.255 { 00:27:43.255 "dma_device_id": "system", 00:27:43.255 "dma_device_type": 1 00:27:43.255 }, 00:27:43.255 { 00:27:43.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:43.255 "dma_device_type": 2 00:27:43.255 } 00:27:43.255 ], 00:27:43.255 "driver_specific": { 00:27:43.255 "passthru": { 00:27:43.255 "name": "pt1", 00:27:43.255 "base_bdev_name": "malloc1" 00:27:43.255 } 00:27:43.255 } 00:27:43.255 }' 00:27:43.255 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:43.255 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:43.255 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:27:43.255 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:43.255 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:43.255 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:43.255 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:43.255 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:43.255 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:43.255 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:43.255 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:43.255 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:43.255 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:43.255 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:27:43.255 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:43.514 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:43.514 "name": "pt2", 00:27:43.514 "aliases": [ 00:27:43.514 "00000000-0000-0000-0000-000000000002" 00:27:43.514 ], 00:27:43.514 "product_name": "passthru", 00:27:43.514 "block_size": 4096, 00:27:43.514 "num_blocks": 8192, 00:27:43.514 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:43.514 "assigned_rate_limits": { 00:27:43.514 "rw_ios_per_sec": 0, 00:27:43.514 "rw_mbytes_per_sec": 0, 
00:27:43.514 "r_mbytes_per_sec": 0, 00:27:43.514 "w_mbytes_per_sec": 0 00:27:43.514 }, 00:27:43.514 "claimed": true, 00:27:43.514 "claim_type": "exclusive_write", 00:27:43.514 "zoned": false, 00:27:43.514 "supported_io_types": { 00:27:43.514 "read": true, 00:27:43.514 "write": true, 00:27:43.514 "unmap": true, 00:27:43.514 "flush": true, 00:27:43.514 "reset": true, 00:27:43.514 "nvme_admin": false, 00:27:43.514 "nvme_io": false, 00:27:43.514 "nvme_io_md": false, 00:27:43.514 "write_zeroes": true, 00:27:43.514 "zcopy": true, 00:27:43.514 "get_zone_info": false, 00:27:43.514 "zone_management": false, 00:27:43.514 "zone_append": false, 00:27:43.514 "compare": false, 00:27:43.514 "compare_and_write": false, 00:27:43.514 "abort": true, 00:27:43.514 "seek_hole": false, 00:27:43.514 "seek_data": false, 00:27:43.514 "copy": true, 00:27:43.514 "nvme_iov_md": false 00:27:43.514 }, 00:27:43.514 "memory_domains": [ 00:27:43.514 { 00:27:43.514 "dma_device_id": "system", 00:27:43.514 "dma_device_type": 1 00:27:43.514 }, 00:27:43.514 { 00:27:43.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:43.514 "dma_device_type": 2 00:27:43.514 } 00:27:43.514 ], 00:27:43.514 "driver_specific": { 00:27:43.514 "passthru": { 00:27:43.514 "name": "pt2", 00:27:43.514 "base_bdev_name": "malloc2" 00:27:43.514 } 00:27:43.514 } 00:27:43.514 }' 00:27:43.514 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:43.514 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:43.514 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:27:43.514 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:43.514 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:43.514 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:43.514 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:43.514 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:43.514 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:43.514 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:43.514 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:43.514 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:43.514 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:43.514 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:27:43.772 [2024-07-15 09:54:11.705081] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:43.772 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # '[' 2c7c5e96-4290-11ef-a0af-c98d8ee52a94 '!=' 2c7c5e96-4290-11ef-a0af-c98d8ee52a94 ']' 00:27:43.772 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:27:43.772 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:43.772 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:27:43.772 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@492 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:27:44.031 [2024-07-15 09:54:11.905106] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:27:44.031 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:44.031 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:44.031 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:44.031 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:44.031 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:44.031 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:44.031 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:44.031 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:44.031 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:44.031 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:44.031 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:44.031 09:54:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:44.290 09:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:44.290 "name": "raid_bdev1", 00:27:44.290 "uuid": "2c7c5e96-4290-11ef-a0af-c98d8ee52a94", 00:27:44.290 "strip_size_kb": 0, 00:27:44.290 "state": "online", 00:27:44.290 "raid_level": "raid1", 00:27:44.290 "superblock": true, 00:27:44.290 "num_base_bdevs": 2, 00:27:44.290 "num_base_bdevs_discovered": 1, 00:27:44.290 "num_base_bdevs_operational": 1, 00:27:44.290 "base_bdevs_list": [ 00:27:44.290 { 00:27:44.290 "name": null, 00:27:44.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:44.290 "is_configured": false, 00:27:44.290 "data_offset": 256, 00:27:44.290 "data_size": 7936 00:27:44.290 }, 00:27:44.290 { 00:27:44.290 "name": "pt2", 00:27:44.290 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:44.290 "is_configured": true, 00:27:44.290 "data_offset": 256, 00:27:44.290 "data_size": 7936 00:27:44.290 } 00:27:44.290 ] 00:27:44.290 }' 00:27:44.290 09:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:44.290 09:54:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:44.549 09:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:44.549 [2024-07-15 09:54:12.637136] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:44.549 [2024-07-15 09:54:12.637172] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:44.549 [2024-07-15 09:54:12.637204] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:44.549 [2024-07-15 09:54:12.637224] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:44.549 [2024-07-15 09:54:12.637232] 
bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x177c4f035180 name raid_bdev1, state offline 00:27:44.808 09:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:44.808 09:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:27:44.808 09:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:27:44.808 09:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:27:44.808 09:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:27:44.808 09:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:27:44.808 09:54:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:45.067 09:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:27:45.067 09:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:27:45.067 09:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:27:45.067 09:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:27:45.067 09:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@518 -- # i=1 00:27:45.067 09:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:45.339 [2024-07-15 09:54:13.221146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:45.339 [2024-07-15 09:54:13.221222] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:45.339 [2024-07-15 09:54:13.221232] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x177c4f034f00 00:27:45.339 [2024-07-15 09:54:13.221239] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:45.339 [2024-07-15 09:54:13.222021] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:45.339 [2024-07-15 09:54:13.222050] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:45.339 [2024-07-15 09:54:13.222072] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:45.339 [2024-07-15 09:54:13.222083] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:45.339 [2024-07-15 09:54:13.222103] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x177c4f035180 00:27:45.339 [2024-07-15 09:54:13.222106] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:45.339 [2024-07-15 09:54:13.222126] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x177c4f097e20 00:27:45.339 [2024-07-15 09:54:13.222172] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x177c4f035180 00:27:45.339 [2024-07-15 09:54:13.222181] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x177c4f035180 00:27:45.339 [2024-07-15 09:54:13.222198] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:45.339 pt2 00:27:45.339 09:54:13 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:45.339 09:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:45.339 09:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:45.339 09:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:45.339 09:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:45.339 09:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:45.339 09:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:45.339 09:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:45.339 09:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:45.339 09:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:45.339 09:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:45.339 09:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:45.602 09:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:45.602 "name": "raid_bdev1", 00:27:45.602 "uuid": "2c7c5e96-4290-11ef-a0af-c98d8ee52a94", 00:27:45.602 "strip_size_kb": 0, 00:27:45.602 "state": "online", 00:27:45.602 "raid_level": "raid1", 00:27:45.602 "superblock": true, 00:27:45.602 "num_base_bdevs": 2, 00:27:45.602 "num_base_bdevs_discovered": 1, 00:27:45.602 "num_base_bdevs_operational": 1, 00:27:45.602 "base_bdevs_list": [ 00:27:45.602 { 00:27:45.602 "name": null, 00:27:45.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:45.602 "is_configured": false, 00:27:45.602 "data_offset": 256, 00:27:45.602 "data_size": 7936 00:27:45.602 }, 00:27:45.602 { 00:27:45.602 "name": "pt2", 00:27:45.602 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:45.602 "is_configured": true, 00:27:45.602 "data_offset": 256, 00:27:45.602 "data_size": 7936 00:27:45.602 } 00:27:45.602 ] 00:27:45.602 }' 00:27:45.602 09:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:45.602 09:54:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:45.861 09:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:45.861 [2024-07-15 09:54:13.897172] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:45.861 [2024-07-15 09:54:13.897198] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:45.861 [2024-07-15 09:54:13.897213] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:45.861 [2024-07-15 09:54:13.897222] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:45.861 [2024-07-15 09:54:13.897226] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x177c4f035180 name raid_bdev1, state offline 00:27:45.861 09:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:45.861 09:54:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:27:46.119 09:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:27:46.119 09:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:27:46.119 09:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:27:46.119 09:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:46.378 [2024-07-15 09:54:14.305219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:46.378 [2024-07-15 09:54:14.305271] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:46.378 [2024-07-15 09:54:14.305280] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x177c4f034c80 00:27:46.379 [2024-07-15 09:54:14.305287] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:46.379 [2024-07-15 09:54:14.306026] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:46.379 [2024-07-15 09:54:14.306050] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:46.379 [2024-07-15 09:54:14.306070] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:46.379 [2024-07-15 09:54:14.306081] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:46.379 [2024-07-15 09:54:14.306105] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:27:46.379 [2024-07-15 09:54:14.306108] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:46.379 [2024-07-15 09:54:14.306112] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x177c4f034780 name raid_bdev1, state configuring 00:27:46.379 [2024-07-15 09:54:14.306119] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:46.379 [2024-07-15 09:54:14.306130] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x177c4f034780 00:27:46.379 [2024-07-15 09:54:14.306133] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:46.379 [2024-07-15 09:54:14.306149] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x177c4f097e20 00:27:46.379 [2024-07-15 09:54:14.306185] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x177c4f034780 00:27:46.379 [2024-07-15 09:54:14.306188] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x177c4f034780 00:27:46.379 [2024-07-15 09:54:14.306203] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:46.379 pt1 00:27:46.379 09:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:27:46.379 09:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:46.379 09:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:46.379 09:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:46.379 09:54:14 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:46.379 09:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:46.379 09:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:46.379 09:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:46.379 09:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:46.379 09:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:46.379 09:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:46.379 09:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:46.379 09:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:46.637 09:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:46.637 "name": "raid_bdev1", 00:27:46.637 "uuid": "2c7c5e96-4290-11ef-a0af-c98d8ee52a94", 00:27:46.637 "strip_size_kb": 0, 00:27:46.637 "state": "online", 00:27:46.637 "raid_level": "raid1", 00:27:46.637 "superblock": true, 00:27:46.637 "num_base_bdevs": 2, 00:27:46.637 "num_base_bdevs_discovered": 1, 00:27:46.637 "num_base_bdevs_operational": 1, 00:27:46.637 "base_bdevs_list": [ 00:27:46.637 { 00:27:46.637 "name": null, 00:27:46.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:46.637 "is_configured": false, 00:27:46.637 "data_offset": 256, 00:27:46.637 "data_size": 7936 00:27:46.637 }, 00:27:46.637 { 00:27:46.637 "name": "pt2", 00:27:46.637 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:46.637 "is_configured": true, 00:27:46.637 "data_offset": 256, 00:27:46.637 "data_size": 7936 00:27:46.637 } 00:27:46.637 ] 00:27:46.637 }' 00:27:46.637 09:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:46.637 09:54:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:46.896 09:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:27:46.896 09:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:27:46.896 09:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:27:46.896 09:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:46.896 09:54:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:27:47.155 [2024-07-15 09:54:15.217308] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:47.155 09:54:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # '[' 2c7c5e96-4290-11ef-a0af-c98d8ee52a94 '!=' 2c7c5e96-4290-11ef-a0af-c98d8ee52a94 ']' 00:27:47.155 09:54:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@562 -- # killprocess 65536 00:27:47.155 09:54:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@948 -- # '[' -z 65536 ']' 00:27:47.155 09:54:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # kill -0 65536 00:27:47.155 09:54:15 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # uname 00:27:47.155 09:54:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:27:47.155 09:54:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # tail -1 00:27:47.155 09:54:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps -c -o command 65536 00:27:47.155 09:54:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:27:47.155 killing process with pid 65536 00:27:47.155 09:54:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:27:47.155 09:54:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65536' 00:27:47.155 09:54:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@967 -- # kill 65536 00:27:47.155 [2024-07-15 09:54:15.250876] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:47.155 09:54:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # wait 65536 00:27:47.155 [2024-07-15 09:54:15.250892] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:47.155 [2024-07-15 09:54:15.250901] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:47.155 [2024-07-15 09:54:15.250905] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x177c4f034780 name raid_bdev1, state offline 00:27:47.414 [2024-07-15 09:54:15.268661] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:47.672 09:54:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@564 -- # return 0 00:27:47.672 00:27:47.672 real 0m11.464s 00:27:47.672 user 0m19.842s 00:27:47.672 sys 0m2.333s 00:27:47.672 09:54:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:47.672 ************************************ 00:27:47.672 END TEST raid_superblock_test_4k 00:27:47.672 ************************************ 00:27:47.672 09:54:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:47.672 09:54:15 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:27:47.672 09:54:15 bdev_raid -- bdev/bdev_raid.sh@900 -- # '[' '' = true ']' 00:27:47.672 09:54:15 bdev_raid -- bdev/bdev_raid.sh@904 -- # base_malloc_params='-m 32' 00:27:47.673 09:54:15 bdev_raid -- bdev/bdev_raid.sh@905 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:27:47.673 09:54:15 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:27:47.673 09:54:15 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:47.673 09:54:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:47.673 ************************************ 00:27:47.673 START TEST raid_state_function_test_sb_md_separate 00:27:47.673 ************************************ 00:27:47.673 09:54:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:27:47.673 09:54:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:27:47.673 09:54:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:27:47.673 09:54:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # local superblock=true 
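The raid_superblock_test_4k case above closes by checking that the re-assembled array still reports its original UUID (the '[' 2c7c5e96-... '!=' 2c7c5e96-... ']' comparison), killing the pid-65536 bdev_svc app, and printing the timing summary; on FreeBSD the harness resolves the process name with ps -c -o command because /proc cannot be read the way it is on Linux. Every state assertion in this trace follows one pattern: dump all raid bdevs over the app's UNIX-domain RPC socket and filter with jq. A minimal stand-alone sketch of that pattern, using only the paths, names, and fields that appear in the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # fetch the one raid bdev and assert the fields verify_raid_bdev_state checks
  info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  [ "$(jq -r .state <<<"$info")" = online ]                   || exit 1
  [ "$(jq -r .raid_level <<<"$info")" = raid1 ]               || exit 1
  [ "$(jq -r .num_base_bdevs_discovered <<<"$info")" -eq 1 ]  || exit 1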
00:27:47.673 09:54:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:27:47.673 09:54:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:27:47.673 09:54:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:47.673 09:54:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:27:47.673 09:54:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:27:47.673 09:54:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:47.673 09:54:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:27:47.673 09:54:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:27:47.673 09:54:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:47.673 09:54:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:47.673 09:54:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:27:47.673 09:54:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:27:47.673 09:54:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local strip_size 00:27:47.673 09:54:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:27:47.673 09:54:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:27:47.673 09:54:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:27:47.673 09:54:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:27:47.673 09:54:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:27:47.673 09:54:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:27:47.673 09:54:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:27:47.673 09:54:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # raid_pid=65919 00:27:47.673 Process raid pid: 65919 00:27:47.673 09:54:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 65919' 00:27:47.673 09:54:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # waitforlisten 65919 /var/tmp/spdk-raid.sock 00:27:47.673 09:54:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@829 -- # '[' -z 65919 ']' 00:27:47.673 09:54:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:47.673 09:54:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:47.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
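raid_state_function_test_sb_md_separate starts its own bdev_svc app (pid 65919) with -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid, and waitforlisten then blocks until the app answers on that socket, with max_retries=100 as shown in the trace. The internals of waitforlisten are not visible here; one way to approximate the wait, assuming the standard rpc_get_methods RPC as the liveness probe, is:

  i=0
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
      i=$((i + 1))                      # mirrors the trace's max_retries=100
      [ "$i" -ge 100 ] && { echo "bdev_svc never listened" >&2; exit 1; }
      sleep 0.1
  done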
00:27:47.673 09:54:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:47.673 09:54:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:47.673 09:54:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:47.673 [2024-07-15 09:54:15.603477] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:27:47.673 [2024-07-15 09:54:15.603702] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:27:48.241 EAL: TSC is not safe to use in SMP mode 00:27:48.241 EAL: TSC is not invariant 00:27:48.241 [2024-07-15 09:54:16.342882] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.500 [2024-07-15 09:54:16.479577] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:27:48.500 [2024-07-15 09:54:16.482708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:48.500 [2024-07-15 09:54:16.483819] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:48.500 [2024-07-15 09:54:16.483847] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:48.758 09:54:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:48.758 09:54:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@862 -- # return 0 00:27:48.758 09:54:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:27:49.016 [2024-07-15 09:54:16.879102] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:49.016 [2024-07-15 09:54:16.879168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:49.016 [2024-07-15 09:54:16.879172] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:49.016 [2024-07-15 09:54:16.879179] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:49.016 09:54:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:49.016 09:54:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:49.016 09:54:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:49.016 09:54:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:49.016 09:54:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:49.016 09:54:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:49.016 09:54:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:49.016 09:54:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:49.016 09:54:16 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:49.016 09:54:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:49.016 09:54:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:49.016 09:54:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:49.275 09:54:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:49.275 "name": "Existed_Raid", 00:27:49.275 "uuid": "32eb7789-4290-11ef-a0af-c98d8ee52a94", 00:27:49.275 "strip_size_kb": 0, 00:27:49.275 "state": "configuring", 00:27:49.275 "raid_level": "raid1", 00:27:49.275 "superblock": true, 00:27:49.275 "num_base_bdevs": 2, 00:27:49.275 "num_base_bdevs_discovered": 0, 00:27:49.275 "num_base_bdevs_operational": 2, 00:27:49.275 "base_bdevs_list": [ 00:27:49.275 { 00:27:49.275 "name": "BaseBdev1", 00:27:49.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:49.275 "is_configured": false, 00:27:49.275 "data_offset": 0, 00:27:49.275 "data_size": 0 00:27:49.275 }, 00:27:49.275 { 00:27:49.275 "name": "BaseBdev2", 00:27:49.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:49.275 "is_configured": false, 00:27:49.275 "data_offset": 0, 00:27:49.275 "data_size": 0 00:27:49.275 } 00:27:49.275 ] 00:27:49.275 }' 00:27:49.275 09:54:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:49.275 09:54:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:49.534 09:54:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:49.534 [2024-07-15 09:54:17.627107] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:49.534 [2024-07-15 09:54:17.627131] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2d1f47a34500 name Existed_Raid, state configuring 00:27:49.794 09:54:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:27:49.794 [2024-07-15 09:54:17.847128] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:49.794 [2024-07-15 09:54:17.847174] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:49.794 [2024-07-15 09:54:17.847177] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:49.794 [2024-07-15 09:54:17.847183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:49.794 09:54:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:27:50.052 [2024-07-15 09:54:18.080265] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:50.052 BaseBdev1 00:27:50.052 09:54:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:27:50.052 09:54:18 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:27:50.052 09:54:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:50.052 09:54:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local i 00:27:50.052 09:54:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:50.052 09:54:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:50.052 09:54:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:50.311 09:54:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:50.569 [ 00:27:50.569 { 00:27:50.569 "name": "BaseBdev1", 00:27:50.569 "aliases": [ 00:27:50.569 "33a29663-4290-11ef-a0af-c98d8ee52a94" 00:27:50.569 ], 00:27:50.569 "product_name": "Malloc disk", 00:27:50.569 "block_size": 4096, 00:27:50.569 "num_blocks": 8192, 00:27:50.569 "uuid": "33a29663-4290-11ef-a0af-c98d8ee52a94", 00:27:50.569 "md_size": 32, 00:27:50.569 "md_interleave": false, 00:27:50.569 "dif_type": 0, 00:27:50.569 "assigned_rate_limits": { 00:27:50.569 "rw_ios_per_sec": 0, 00:27:50.569 "rw_mbytes_per_sec": 0, 00:27:50.569 "r_mbytes_per_sec": 0, 00:27:50.569 "w_mbytes_per_sec": 0 00:27:50.569 }, 00:27:50.569 "claimed": true, 00:27:50.569 "claim_type": "exclusive_write", 00:27:50.569 "zoned": false, 00:27:50.569 "supported_io_types": { 00:27:50.569 "read": true, 00:27:50.569 "write": true, 00:27:50.569 "unmap": true, 00:27:50.569 "flush": true, 00:27:50.569 "reset": true, 00:27:50.569 "nvme_admin": false, 00:27:50.569 "nvme_io": false, 00:27:50.569 "nvme_io_md": false, 00:27:50.569 "write_zeroes": true, 00:27:50.569 "zcopy": true, 00:27:50.569 "get_zone_info": false, 00:27:50.569 "zone_management": false, 00:27:50.569 "zone_append": false, 00:27:50.569 "compare": false, 00:27:50.569 "compare_and_write": false, 00:27:50.569 "abort": true, 00:27:50.569 "seek_hole": false, 00:27:50.569 "seek_data": false, 00:27:50.569 "copy": true, 00:27:50.569 "nvme_iov_md": false 00:27:50.569 }, 00:27:50.569 "memory_domains": [ 00:27:50.569 { 00:27:50.570 "dma_device_id": "system", 00:27:50.570 "dma_device_type": 1 00:27:50.570 }, 00:27:50.570 { 00:27:50.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:50.570 "dma_device_type": 2 00:27:50.570 } 00:27:50.570 ], 00:27:50.570 "driver_specific": {} 00:27:50.570 } 00:27:50.570 ] 00:27:50.570 09:54:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # return 0 00:27:50.570 09:54:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:50.570 09:54:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:50.570 09:54:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:50.570 09:54:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:50.570 09:54:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:50.570 09:54:18 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:50.570 09:54:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:50.570 09:54:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:50.570 09:54:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:50.570 09:54:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:50.570 09:54:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:50.570 09:54:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:50.827 09:54:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:50.828 "name": "Existed_Raid", 00:27:50.828 "uuid": "337f2d41-4290-11ef-a0af-c98d8ee52a94", 00:27:50.828 "strip_size_kb": 0, 00:27:50.828 "state": "configuring", 00:27:50.828 "raid_level": "raid1", 00:27:50.828 "superblock": true, 00:27:50.828 "num_base_bdevs": 2, 00:27:50.828 "num_base_bdevs_discovered": 1, 00:27:50.828 "num_base_bdevs_operational": 2, 00:27:50.828 "base_bdevs_list": [ 00:27:50.828 { 00:27:50.828 "name": "BaseBdev1", 00:27:50.828 "uuid": "33a29663-4290-11ef-a0af-c98d8ee52a94", 00:27:50.828 "is_configured": true, 00:27:50.828 "data_offset": 256, 00:27:50.828 "data_size": 7936 00:27:50.828 }, 00:27:50.828 { 00:27:50.828 "name": "BaseBdev2", 00:27:50.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:50.828 "is_configured": false, 00:27:50.828 "data_offset": 0, 00:27:50.828 "data_size": 0 00:27:50.828 } 00:27:50.828 ] 00:27:50.828 }' 00:27:50.828 09:54:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:50.828 09:54:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:51.085 09:54:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:51.344 [2024-07-15 09:54:19.323219] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:51.344 [2024-07-15 09:54:19.323250] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2d1f47a34500 name Existed_Raid, state configuring 00:27:51.344 09:54:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:27:51.602 [2024-07-15 09:54:19.527237] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:51.602 [2024-07-15 09:54:19.528102] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:51.602 [2024-07-15 09:54:19.528149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:51.602 09:54:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:27:51.602 09:54:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:27:51.602 09:54:19 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:51.602 09:54:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:51.602 09:54:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:51.602 09:54:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:51.602 09:54:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:51.602 09:54:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:51.602 09:54:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:51.602 09:54:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:51.602 09:54:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:51.602 09:54:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:51.602 09:54:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:51.602 09:54:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:51.862 09:54:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:51.862 "name": "Existed_Raid", 00:27:51.862 "uuid": "347f8a79-4290-11ef-a0af-c98d8ee52a94", 00:27:51.862 "strip_size_kb": 0, 00:27:51.862 "state": "configuring", 00:27:51.862 "raid_level": "raid1", 00:27:51.862 "superblock": true, 00:27:51.862 "num_base_bdevs": 2, 00:27:51.862 "num_base_bdevs_discovered": 1, 00:27:51.862 "num_base_bdevs_operational": 2, 00:27:51.862 "base_bdevs_list": [ 00:27:51.862 { 00:27:51.862 "name": "BaseBdev1", 00:27:51.862 "uuid": "33a29663-4290-11ef-a0af-c98d8ee52a94", 00:27:51.862 "is_configured": true, 00:27:51.862 "data_offset": 256, 00:27:51.862 "data_size": 7936 00:27:51.862 }, 00:27:51.862 { 00:27:51.862 "name": "BaseBdev2", 00:27:51.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:51.862 "is_configured": false, 00:27:51.862 "data_offset": 0, 00:27:51.862 "data_size": 0 00:27:51.862 } 00:27:51.862 ] 00:27:51.862 }' 00:27:51.862 09:54:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:51.862 09:54:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:52.120 09:54:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:27:52.379 [2024-07-15 09:54:20.267375] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:52.379 [2024-07-15 09:54:20.267429] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2d1f47a34a00 00:27:52.379 [2024-07-15 09:54:20.267435] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:52.379 [2024-07-15 09:54:20.267452] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x2d1f47a97e20 00:27:52.379 [2024-07-15 09:54:20.267481] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2d1f47a34a00 00:27:52.379 [2024-07-15 09:54:20.267484] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2d1f47a34a00 00:27:52.379 [2024-07-15 09:54:20.267496] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:52.379 BaseBdev2 00:27:52.379 09:54:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:27:52.379 09:54:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:27:52.379 09:54:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:52.379 09:54:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local i 00:27:52.379 09:54:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:52.379 09:54:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:52.379 09:54:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:52.379 09:54:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:52.638 [ 00:27:52.638 { 00:27:52.638 "name": "BaseBdev2", 00:27:52.638 "aliases": [ 00:27:52.638 "34f076bf-4290-11ef-a0af-c98d8ee52a94" 00:27:52.638 ], 00:27:52.638 "product_name": "Malloc disk", 00:27:52.638 "block_size": 4096, 00:27:52.638 "num_blocks": 8192, 00:27:52.638 "uuid": "34f076bf-4290-11ef-a0af-c98d8ee52a94", 00:27:52.638 "md_size": 32, 00:27:52.638 "md_interleave": false, 00:27:52.638 "dif_type": 0, 00:27:52.638 "assigned_rate_limits": { 00:27:52.638 "rw_ios_per_sec": 0, 00:27:52.638 "rw_mbytes_per_sec": 0, 00:27:52.638 "r_mbytes_per_sec": 0, 00:27:52.638 "w_mbytes_per_sec": 0 00:27:52.638 }, 00:27:52.638 "claimed": true, 00:27:52.638 "claim_type": "exclusive_write", 00:27:52.638 "zoned": false, 00:27:52.638 "supported_io_types": { 00:27:52.638 "read": true, 00:27:52.638 "write": true, 00:27:52.638 "unmap": true, 00:27:52.638 "flush": true, 00:27:52.638 "reset": true, 00:27:52.638 "nvme_admin": false, 00:27:52.638 "nvme_io": false, 00:27:52.638 "nvme_io_md": false, 00:27:52.638 "write_zeroes": true, 00:27:52.638 "zcopy": true, 00:27:52.638 "get_zone_info": false, 00:27:52.638 "zone_management": false, 00:27:52.638 "zone_append": false, 00:27:52.638 "compare": false, 00:27:52.638 "compare_and_write": false, 00:27:52.638 "abort": true, 00:27:52.638 "seek_hole": false, 00:27:52.638 "seek_data": false, 00:27:52.638 "copy": true, 00:27:52.638 "nvme_iov_md": false 00:27:52.638 }, 00:27:52.638 "memory_domains": [ 00:27:52.638 { 00:27:52.638 "dma_device_id": "system", 00:27:52.638 "dma_device_type": 1 00:27:52.638 }, 00:27:52.638 { 00:27:52.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:52.638 "dma_device_type": 2 00:27:52.638 } 00:27:52.638 ], 00:27:52.638 "driver_specific": {} 00:27:52.638 } 00:27:52.638 ] 00:27:52.638 09:54:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # return 0 00:27:52.638 09:54:20 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:27:52.638 09:54:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:27:52.638 09:54:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:27:52.638 09:54:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:52.638 09:54:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:52.638 09:54:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:52.638 09:54:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:52.638 09:54:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:52.638 09:54:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:52.638 09:54:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:52.638 09:54:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:52.638 09:54:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:52.638 09:54:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:52.638 09:54:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:52.922 09:54:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:52.922 "name": "Existed_Raid", 00:27:52.922 "uuid": "347f8a79-4290-11ef-a0af-c98d8ee52a94", 00:27:52.922 "strip_size_kb": 0, 00:27:52.922 "state": "online", 00:27:52.922 "raid_level": "raid1", 00:27:52.922 "superblock": true, 00:27:52.922 "num_base_bdevs": 2, 00:27:52.922 "num_base_bdevs_discovered": 2, 00:27:52.922 "num_base_bdevs_operational": 2, 00:27:52.922 "base_bdevs_list": [ 00:27:52.922 { 00:27:52.922 "name": "BaseBdev1", 00:27:52.922 "uuid": "33a29663-4290-11ef-a0af-c98d8ee52a94", 00:27:52.922 "is_configured": true, 00:27:52.922 "data_offset": 256, 00:27:52.922 "data_size": 7936 00:27:52.922 }, 00:27:52.922 { 00:27:52.922 "name": "BaseBdev2", 00:27:52.922 "uuid": "34f076bf-4290-11ef-a0af-c98d8ee52a94", 00:27:52.922 "is_configured": true, 00:27:52.922 "data_offset": 256, 00:27:52.922 "data_size": 7936 00:27:52.922 } 00:27:52.922 ] 00:27:52.922 }' 00:27:52.922 09:54:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:52.922 09:54:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:53.193 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:27:53.193 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:27:53.193 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:53.193 09:54:21 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:53.193 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:53.193 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:27:53.193 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:27:53.193 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:53.193 [2024-07-15 09:54:21.287340] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:53.453 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:53.453 "name": "Existed_Raid", 00:27:53.453 "aliases": [ 00:27:53.453 "347f8a79-4290-11ef-a0af-c98d8ee52a94" 00:27:53.453 ], 00:27:53.453 "product_name": "Raid Volume", 00:27:53.453 "block_size": 4096, 00:27:53.453 "num_blocks": 7936, 00:27:53.453 "uuid": "347f8a79-4290-11ef-a0af-c98d8ee52a94", 00:27:53.453 "md_size": 32, 00:27:53.453 "md_interleave": false, 00:27:53.453 "dif_type": 0, 00:27:53.453 "assigned_rate_limits": { 00:27:53.453 "rw_ios_per_sec": 0, 00:27:53.453 "rw_mbytes_per_sec": 0, 00:27:53.453 "r_mbytes_per_sec": 0, 00:27:53.453 "w_mbytes_per_sec": 0 00:27:53.453 }, 00:27:53.453 "claimed": false, 00:27:53.453 "zoned": false, 00:27:53.453 "supported_io_types": { 00:27:53.453 "read": true, 00:27:53.453 "write": true, 00:27:53.453 "unmap": false, 00:27:53.453 "flush": false, 00:27:53.453 "reset": true, 00:27:53.453 "nvme_admin": false, 00:27:53.453 "nvme_io": false, 00:27:53.453 "nvme_io_md": false, 00:27:53.453 "write_zeroes": true, 00:27:53.453 "zcopy": false, 00:27:53.453 "get_zone_info": false, 00:27:53.453 "zone_management": false, 00:27:53.453 "zone_append": false, 00:27:53.453 "compare": false, 00:27:53.453 "compare_and_write": false, 00:27:53.453 "abort": false, 00:27:53.453 "seek_hole": false, 00:27:53.453 "seek_data": false, 00:27:53.453 "copy": false, 00:27:53.453 "nvme_iov_md": false 00:27:53.453 }, 00:27:53.453 "memory_domains": [ 00:27:53.453 { 00:27:53.453 "dma_device_id": "system", 00:27:53.453 "dma_device_type": 1 00:27:53.453 }, 00:27:53.453 { 00:27:53.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:53.453 "dma_device_type": 2 00:27:53.453 }, 00:27:53.453 { 00:27:53.453 "dma_device_id": "system", 00:27:53.453 "dma_device_type": 1 00:27:53.453 }, 00:27:53.453 { 00:27:53.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:53.453 "dma_device_type": 2 00:27:53.453 } 00:27:53.453 ], 00:27:53.453 "driver_specific": { 00:27:53.453 "raid": { 00:27:53.453 "uuid": "347f8a79-4290-11ef-a0af-c98d8ee52a94", 00:27:53.453 "strip_size_kb": 0, 00:27:53.453 "state": "online", 00:27:53.453 "raid_level": "raid1", 00:27:53.453 "superblock": true, 00:27:53.453 "num_base_bdevs": 2, 00:27:53.453 "num_base_bdevs_discovered": 2, 00:27:53.453 "num_base_bdevs_operational": 2, 00:27:53.453 "base_bdevs_list": [ 00:27:53.453 { 00:27:53.453 "name": "BaseBdev1", 00:27:53.453 "uuid": "33a29663-4290-11ef-a0af-c98d8ee52a94", 00:27:53.453 "is_configured": true, 00:27:53.453 "data_offset": 256, 00:27:53.453 "data_size": 7936 00:27:53.453 }, 00:27:53.453 { 00:27:53.453 "name": "BaseBdev2", 00:27:53.453 "uuid": "34f076bf-4290-11ef-a0af-c98d8ee52a94", 00:27:53.453 "is_configured": true, 00:27:53.453 "data_offset": 
256, 00:27:53.453 "data_size": 7936 00:27:53.453 } 00:27:53.453 ] 00:27:53.453 } 00:27:53.453 } 00:27:53.453 }' 00:27:53.453 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:53.453 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:27:53.453 BaseBdev2' 00:27:53.453 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:53.453 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:27:53.453 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:53.453 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:53.453 "name": "BaseBdev1", 00:27:53.453 "aliases": [ 00:27:53.453 "33a29663-4290-11ef-a0af-c98d8ee52a94" 00:27:53.453 ], 00:27:53.453 "product_name": "Malloc disk", 00:27:53.453 "block_size": 4096, 00:27:53.453 "num_blocks": 8192, 00:27:53.453 "uuid": "33a29663-4290-11ef-a0af-c98d8ee52a94", 00:27:53.453 "md_size": 32, 00:27:53.453 "md_interleave": false, 00:27:53.453 "dif_type": 0, 00:27:53.453 "assigned_rate_limits": { 00:27:53.453 "rw_ios_per_sec": 0, 00:27:53.453 "rw_mbytes_per_sec": 0, 00:27:53.453 "r_mbytes_per_sec": 0, 00:27:53.453 "w_mbytes_per_sec": 0 00:27:53.453 }, 00:27:53.453 "claimed": true, 00:27:53.453 "claim_type": "exclusive_write", 00:27:53.453 "zoned": false, 00:27:53.453 "supported_io_types": { 00:27:53.453 "read": true, 00:27:53.453 "write": true, 00:27:53.453 "unmap": true, 00:27:53.453 "flush": true, 00:27:53.453 "reset": true, 00:27:53.453 "nvme_admin": false, 00:27:53.453 "nvme_io": false, 00:27:53.453 "nvme_io_md": false, 00:27:53.453 "write_zeroes": true, 00:27:53.453 "zcopy": true, 00:27:53.453 "get_zone_info": false, 00:27:53.453 "zone_management": false, 00:27:53.453 "zone_append": false, 00:27:53.453 "compare": false, 00:27:53.453 "compare_and_write": false, 00:27:53.453 "abort": true, 00:27:53.453 "seek_hole": false, 00:27:53.453 "seek_data": false, 00:27:53.453 "copy": true, 00:27:53.453 "nvme_iov_md": false 00:27:53.453 }, 00:27:53.453 "memory_domains": [ 00:27:53.453 { 00:27:53.453 "dma_device_id": "system", 00:27:53.453 "dma_device_type": 1 00:27:53.453 }, 00:27:53.453 { 00:27:53.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:53.453 "dma_device_type": 2 00:27:53.453 } 00:27:53.453 ], 00:27:53.453 "driver_specific": {} 00:27:53.453 }' 00:27:53.453 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:53.453 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:53.453 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:27:53.453 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:53.453 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:53.712 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:53.712 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
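The verify_raid_bdev_properties pass running through this part of the trace asserts, for Existed_Raid and then for each base bdev, that the separate-metadata geometry survives assembly: block_size 4096, md_size 32, md_interleave false, and dif_type 0, the values set by bdev_malloc_create 32 4096 -m 32 earlier in the run. The same four checks can be condensed into one jq predicate; this sketch reuses only names and values asserted in the trace:

  check_md_separate() {   # $1 = bdev name; exit status 0 iff geometry matches
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
          bdev_get_bdevs -b "$1" |
          jq -e '.[0] | .block_size == 4096 and .md_size == 32
                        and .md_interleave == false and .dif_type == 0' >/dev/null
  }
  check_md_separate BaseBdev1 && check_md_separate BaseBdev2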
00:27:53.712 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:53.712 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:27:53.712 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:53.712 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:53.712 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:27:53.712 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:53.712 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:27:53.712 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:53.712 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:53.712 "name": "BaseBdev2", 00:27:53.712 "aliases": [ 00:27:53.712 "34f076bf-4290-11ef-a0af-c98d8ee52a94" 00:27:53.712 ], 00:27:53.712 "product_name": "Malloc disk", 00:27:53.712 "block_size": 4096, 00:27:53.712 "num_blocks": 8192, 00:27:53.712 "uuid": "34f076bf-4290-11ef-a0af-c98d8ee52a94", 00:27:53.712 "md_size": 32, 00:27:53.712 "md_interleave": false, 00:27:53.712 "dif_type": 0, 00:27:53.712 "assigned_rate_limits": { 00:27:53.712 "rw_ios_per_sec": 0, 00:27:53.712 "rw_mbytes_per_sec": 0, 00:27:53.712 "r_mbytes_per_sec": 0, 00:27:53.712 "w_mbytes_per_sec": 0 00:27:53.712 }, 00:27:53.712 "claimed": true, 00:27:53.712 "claim_type": "exclusive_write", 00:27:53.712 "zoned": false, 00:27:53.712 "supported_io_types": { 00:27:53.712 "read": true, 00:27:53.712 "write": true, 00:27:53.712 "unmap": true, 00:27:53.712 "flush": true, 00:27:53.712 "reset": true, 00:27:53.712 "nvme_admin": false, 00:27:53.712 "nvme_io": false, 00:27:53.712 "nvme_io_md": false, 00:27:53.712 "write_zeroes": true, 00:27:53.712 "zcopy": true, 00:27:53.712 "get_zone_info": false, 00:27:53.712 "zone_management": false, 00:27:53.712 "zone_append": false, 00:27:53.712 "compare": false, 00:27:53.712 "compare_and_write": false, 00:27:53.712 "abort": true, 00:27:53.712 "seek_hole": false, 00:27:53.712 "seek_data": false, 00:27:53.712 "copy": true, 00:27:53.712 "nvme_iov_md": false 00:27:53.712 }, 00:27:53.712 "memory_domains": [ 00:27:53.712 { 00:27:53.712 "dma_device_id": "system", 00:27:53.712 "dma_device_type": 1 00:27:53.712 }, 00:27:53.712 { 00:27:53.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:53.712 "dma_device_type": 2 00:27:53.712 } 00:27:53.712 ], 00:27:53.712 "driver_specific": {} 00:27:53.712 }' 00:27:53.712 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:53.712 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:53.712 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:27:53.712 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:53.712 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:53.971 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 
32 ]] 00:27:53.971 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:53.971 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:53.971 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:27:53.971 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:53.971 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:53.971 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:27:53.971 09:54:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:27:53.971 [2024-07-15 09:54:22.047362] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:53.971 09:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # local expected_state 00:27:53.971 09:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:27:53.971 09:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:53.971 09:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:27:53.971 09:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:27:53.971 09:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:27:53.971 09:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:53.971 09:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:53.971 09:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:53.971 09:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:53.971 09:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:53.971 09:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:53.971 09:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:54.229 09:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:54.229 09:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:54.229 09:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:54.230 09:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:54.230 09:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:54.230 "name": "Existed_Raid", 00:27:54.230 "uuid": "347f8a79-4290-11ef-a0af-c98d8ee52a94", 00:27:54.230 "strip_size_kb": 0, 00:27:54.230 "state": "online", 00:27:54.230 
"raid_level": "raid1", 00:27:54.230 "superblock": true, 00:27:54.230 "num_base_bdevs": 2, 00:27:54.230 "num_base_bdevs_discovered": 1, 00:27:54.230 "num_base_bdevs_operational": 1, 00:27:54.230 "base_bdevs_list": [ 00:27:54.230 { 00:27:54.230 "name": null, 00:27:54.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:54.230 "is_configured": false, 00:27:54.230 "data_offset": 256, 00:27:54.230 "data_size": 7936 00:27:54.230 }, 00:27:54.230 { 00:27:54.230 "name": "BaseBdev2", 00:27:54.230 "uuid": "34f076bf-4290-11ef-a0af-c98d8ee52a94", 00:27:54.230 "is_configured": true, 00:27:54.230 "data_offset": 256, 00:27:54.230 "data_size": 7936 00:27:54.230 } 00:27:54.230 ] 00:27:54.230 }' 00:27:54.230 09:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:54.230 09:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:54.798 09:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:27:54.798 09:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:54.798 09:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:54.798 09:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:27:54.798 09:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:27:54.798 09:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:54.798 09:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:27:55.057 [2024-07-15 09:54:22.996369] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:55.057 [2024-07-15 09:54:22.996430] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:55.057 [2024-07-15 09:54:23.004930] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:55.057 [2024-07-15 09:54:23.004943] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:55.057 [2024-07-15 09:54:23.004947] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2d1f47a34a00 name Existed_Raid, state offline 00:27:55.057 09:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:27:55.057 09:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:55.057 09:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:55.057 09:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:27:55.315 09:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:27:55.315 09:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:27:55.315 09:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:27:55.315 
09:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@341 -- # killprocess 65919 00:27:55.315 09:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@948 -- # '[' -z 65919 ']' 00:27:55.315 09:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # kill -0 65919 00:27:55.315 09:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # uname 00:27:55.315 09:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:27:55.315 09:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps -c -o command 65919 00:27:55.315 09:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # tail -1 00:27:55.316 09:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:27:55.316 09:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:27:55.316 killing process with pid 65919 00:27:55.316 09:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65919' 00:27:55.316 09:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@967 -- # kill 65919 00:27:55.316 [2024-07-15 09:54:23.278176] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:55.316 09:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # wait 65919 00:27:55.316 [2024-07-15 09:54:23.278271] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:55.573 09:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@343 -- # return 0 00:27:55.573 00:27:55.573 real 0m7.857s 00:27:55.573 user 0m13.165s 00:27:55.573 sys 0m1.866s 00:27:55.573 09:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:55.573 09:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:55.573 ************************************ 00:27:55.573 END TEST raid_state_function_test_sb_md_separate 00:27:55.573 ************************************ 00:27:55.573 09:54:23 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:27:55.573 09:54:23 bdev_raid -- bdev/bdev_raid.sh@906 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:27:55.573 09:54:23 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:27:55.573 09:54:23 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:55.573 09:54:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:55.573 ************************************ 00:27:55.573 START TEST raid_superblock_test_md_separate 00:27:55.573 ************************************ 00:27:55.573 09:54:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:27:55.573 09:54:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:27:55.573 09:54:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:27:55.574 09:54:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:27:55.574 09:54:23 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:27:55.574 09:54:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:27:55.574 09:54:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:27:55.574 09:54:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:27:55.574 09:54:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:27:55.574 09:54:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:27:55.574 09:54:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local strip_size 00:27:55.574 09:54:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:27:55.574 09:54:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:27:55.574 09:54:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:27:55.574 09:54:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:27:55.574 09:54:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:27:55.574 09:54:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # raid_pid=66189 00:27:55.574 09:54:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # waitforlisten 66189 /var/tmp/spdk-raid.sock 00:27:55.574 09:54:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@829 -- # '[' -z 66189 ']' 00:27:55.574 09:54:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:27:55.574 09:54:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:55.574 09:54:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:55.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:55.574 09:54:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:55.574 09:54:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:55.574 09:54:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:55.574 [2024-07-15 09:54:23.516461] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:27:55.574 [2024-07-15 09:54:23.516764] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:27:56.141 EAL: TSC is not safe to use in SMP mode 00:27:56.141 EAL: TSC is not invariant 00:27:56.141 [2024-07-15 09:54:23.948247] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.141 [2024-07-15 09:54:24.066375] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
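(For reference while reading the trace that follows: the raid_superblock_test_md_separate setup reduces to the short RPC sequence sketched below. This is a minimal reproduction sketch, not part of the captured log, assuming a bdev_svc app already listening on /var/tmp/spdk-raid.sock as launched above; the bdev names, UUIDs, and flags are taken verbatim from the trace, where -m 32 requests 32 bytes of per-block metadata on each malloc bdev — reported later in the dumps as "md_size": 32 with "md_interleave": false, i.e. separate metadata — and -s asks bdev_raid_create to write a superblock.)

    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

(The jq filter is the same one the test uses below to pull the raid bdev's state out of bdev_raid_get_bdevs output.)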
00:27:56.141 [2024-07-15 09:54:24.069121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:56.141 [2024-07-15 09:54:24.070040] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:56.141 [2024-07-15 09:54:24.070055] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:56.398 09:54:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:56.398 09:54:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@862 -- # return 0 00:27:56.398 09:54:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:27:56.398 09:54:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:27:56.398 09:54:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:27:56.398 09:54:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:27:56.398 09:54:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:27:56.399 09:54:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:56.399 09:54:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:27:56.399 09:54:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:56.399 09:54:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:27:56.656 malloc1 00:27:56.656 09:54:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:56.913 [2024-07-15 09:54:24.788215] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:56.913 [2024-07-15 09:54:24.788290] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:56.913 [2024-07-15 09:54:24.788301] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x132823834780 00:27:56.913 [2024-07-15 09:54:24.788308] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:56.913 [2024-07-15 09:54:24.789253] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:56.913 [2024-07-15 09:54:24.789279] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:56.913 pt1 00:27:56.913 09:54:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:27:56.913 09:54:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:27:56.913 09:54:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:27:56.913 09:54:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:27:56.913 09:54:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:27:56.913 09:54:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:56.913 09:54:24 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:27:56.913 09:54:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:56.913 09:54:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:27:56.913 malloc2 00:27:56.913 09:54:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:57.171 [2024-07-15 09:54:25.196248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:57.171 [2024-07-15 09:54:25.196307] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:57.171 [2024-07-15 09:54:25.196317] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x132823834c80 00:27:57.171 [2024-07-15 09:54:25.196324] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:57.171 [2024-07-15 09:54:25.196990] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:57.171 [2024-07-15 09:54:25.197028] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:57.171 pt2 00:27:57.171 09:54:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:27:57.171 09:54:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:27:57.171 09:54:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:27:57.430 [2024-07-15 09:54:25.416303] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:57.430 [2024-07-15 09:54:25.416923] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:57.430 [2024-07-15 09:54:25.416982] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x132823834f00 00:27:57.430 [2024-07-15 09:54:25.417003] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:57.430 [2024-07-15 09:54:25.417059] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x132823897e20 00:27:57.430 [2024-07-15 09:54:25.417092] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x132823834f00 00:27:57.430 [2024-07-15 09:54:25.417096] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x132823834f00 00:27:57.430 [2024-07-15 09:54:25.417112] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:57.430 09:54:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:57.430 09:54:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:57.430 09:54:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:57.430 09:54:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:57.430 09:54:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:57.430 
09:54:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:57.430 09:54:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:57.430 09:54:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:57.430 09:54:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:57.430 09:54:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:57.430 09:54:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:57.430 09:54:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:57.687 09:54:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:57.687 "name": "raid_bdev1", 00:27:57.687 "uuid": "380223d8-4290-11ef-a0af-c98d8ee52a94", 00:27:57.687 "strip_size_kb": 0, 00:27:57.687 "state": "online", 00:27:57.687 "raid_level": "raid1", 00:27:57.687 "superblock": true, 00:27:57.687 "num_base_bdevs": 2, 00:27:57.687 "num_base_bdevs_discovered": 2, 00:27:57.687 "num_base_bdevs_operational": 2, 00:27:57.687 "base_bdevs_list": [ 00:27:57.687 { 00:27:57.687 "name": "pt1", 00:27:57.687 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:57.687 "is_configured": true, 00:27:57.687 "data_offset": 256, 00:27:57.687 "data_size": 7936 00:27:57.687 }, 00:27:57.687 { 00:27:57.687 "name": "pt2", 00:27:57.687 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:57.687 "is_configured": true, 00:27:57.687 "data_offset": 256, 00:27:57.687 "data_size": 7936 00:27:57.687 } 00:27:57.687 ] 00:27:57.687 }' 00:27:57.687 09:54:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:57.687 09:54:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:57.945 09:54:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:27:57.945 09:54:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:27:57.945 09:54:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:57.945 09:54:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:57.945 09:54:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:57.945 09:54:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:27:57.945 09:54:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:57.945 09:54:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:58.203 [2024-07-15 09:54:26.112354] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:58.203 09:54:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:58.203 "name": "raid_bdev1", 00:27:58.203 "aliases": [ 00:27:58.203 "380223d8-4290-11ef-a0af-c98d8ee52a94" 00:27:58.203 ], 00:27:58.203 "product_name": "Raid Volume", 00:27:58.203 "block_size": 
4096, 00:27:58.203 "num_blocks": 7936, 00:27:58.203 "uuid": "380223d8-4290-11ef-a0af-c98d8ee52a94", 00:27:58.203 "md_size": 32, 00:27:58.203 "md_interleave": false, 00:27:58.203 "dif_type": 0, 00:27:58.203 "assigned_rate_limits": { 00:27:58.203 "rw_ios_per_sec": 0, 00:27:58.203 "rw_mbytes_per_sec": 0, 00:27:58.203 "r_mbytes_per_sec": 0, 00:27:58.203 "w_mbytes_per_sec": 0 00:27:58.203 }, 00:27:58.203 "claimed": false, 00:27:58.203 "zoned": false, 00:27:58.203 "supported_io_types": { 00:27:58.203 "read": true, 00:27:58.203 "write": true, 00:27:58.203 "unmap": false, 00:27:58.203 "flush": false, 00:27:58.203 "reset": true, 00:27:58.203 "nvme_admin": false, 00:27:58.203 "nvme_io": false, 00:27:58.203 "nvme_io_md": false, 00:27:58.203 "write_zeroes": true, 00:27:58.203 "zcopy": false, 00:27:58.203 "get_zone_info": false, 00:27:58.203 "zone_management": false, 00:27:58.203 "zone_append": false, 00:27:58.203 "compare": false, 00:27:58.204 "compare_and_write": false, 00:27:58.204 "abort": false, 00:27:58.204 "seek_hole": false, 00:27:58.204 "seek_data": false, 00:27:58.204 "copy": false, 00:27:58.204 "nvme_iov_md": false 00:27:58.204 }, 00:27:58.204 "memory_domains": [ 00:27:58.204 { 00:27:58.204 "dma_device_id": "system", 00:27:58.204 "dma_device_type": 1 00:27:58.204 }, 00:27:58.204 { 00:27:58.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:58.204 "dma_device_type": 2 00:27:58.204 }, 00:27:58.204 { 00:27:58.204 "dma_device_id": "system", 00:27:58.204 "dma_device_type": 1 00:27:58.204 }, 00:27:58.204 { 00:27:58.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:58.204 "dma_device_type": 2 00:27:58.204 } 00:27:58.204 ], 00:27:58.204 "driver_specific": { 00:27:58.204 "raid": { 00:27:58.204 "uuid": "380223d8-4290-11ef-a0af-c98d8ee52a94", 00:27:58.204 "strip_size_kb": 0, 00:27:58.204 "state": "online", 00:27:58.204 "raid_level": "raid1", 00:27:58.204 "superblock": true, 00:27:58.204 "num_base_bdevs": 2, 00:27:58.204 "num_base_bdevs_discovered": 2, 00:27:58.204 "num_base_bdevs_operational": 2, 00:27:58.204 "base_bdevs_list": [ 00:27:58.204 { 00:27:58.204 "name": "pt1", 00:27:58.204 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:58.204 "is_configured": true, 00:27:58.204 "data_offset": 256, 00:27:58.204 "data_size": 7936 00:27:58.204 }, 00:27:58.204 { 00:27:58.204 "name": "pt2", 00:27:58.204 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:58.204 "is_configured": true, 00:27:58.204 "data_offset": 256, 00:27:58.204 "data_size": 7936 00:27:58.204 } 00:27:58.204 ] 00:27:58.204 } 00:27:58.204 } 00:27:58.204 }' 00:27:58.204 09:54:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:58.204 09:54:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:27:58.204 pt2' 00:27:58.204 09:54:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:58.204 09:54:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:27:58.204 09:54:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:58.464 09:54:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:58.464 "name": "pt1", 00:27:58.464 "aliases": [ 00:27:58.464 "00000000-0000-0000-0000-000000000001" 00:27:58.464 ], 00:27:58.464 "product_name": 
"passthru", 00:27:58.464 "block_size": 4096, 00:27:58.464 "num_blocks": 8192, 00:27:58.464 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:58.464 "md_size": 32, 00:27:58.464 "md_interleave": false, 00:27:58.464 "dif_type": 0, 00:27:58.464 "assigned_rate_limits": { 00:27:58.464 "rw_ios_per_sec": 0, 00:27:58.464 "rw_mbytes_per_sec": 0, 00:27:58.464 "r_mbytes_per_sec": 0, 00:27:58.464 "w_mbytes_per_sec": 0 00:27:58.464 }, 00:27:58.464 "claimed": true, 00:27:58.464 "claim_type": "exclusive_write", 00:27:58.464 "zoned": false, 00:27:58.464 "supported_io_types": { 00:27:58.464 "read": true, 00:27:58.464 "write": true, 00:27:58.464 "unmap": true, 00:27:58.464 "flush": true, 00:27:58.464 "reset": true, 00:27:58.464 "nvme_admin": false, 00:27:58.464 "nvme_io": false, 00:27:58.464 "nvme_io_md": false, 00:27:58.465 "write_zeroes": true, 00:27:58.465 "zcopy": true, 00:27:58.465 "get_zone_info": false, 00:27:58.465 "zone_management": false, 00:27:58.465 "zone_append": false, 00:27:58.465 "compare": false, 00:27:58.465 "compare_and_write": false, 00:27:58.465 "abort": true, 00:27:58.465 "seek_hole": false, 00:27:58.465 "seek_data": false, 00:27:58.465 "copy": true, 00:27:58.465 "nvme_iov_md": false 00:27:58.465 }, 00:27:58.465 "memory_domains": [ 00:27:58.465 { 00:27:58.465 "dma_device_id": "system", 00:27:58.465 "dma_device_type": 1 00:27:58.465 }, 00:27:58.465 { 00:27:58.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:58.465 "dma_device_type": 2 00:27:58.465 } 00:27:58.465 ], 00:27:58.465 "driver_specific": { 00:27:58.465 "passthru": { 00:27:58.465 "name": "pt1", 00:27:58.465 "base_bdev_name": "malloc1" 00:27:58.465 } 00:27:58.465 } 00:27:58.465 }' 00:27:58.465 09:54:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:58.465 09:54:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:58.465 09:54:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:27:58.465 09:54:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:58.465 09:54:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:58.465 09:54:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:58.465 09:54:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:58.465 09:54:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:58.465 09:54:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:27:58.465 09:54:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:58.465 09:54:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:58.465 09:54:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:27:58.465 09:54:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:58.465 09:54:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:27:58.465 09:54:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:58.763 09:54:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:58.763 "name": 
"pt2", 00:27:58.763 "aliases": [ 00:27:58.763 "00000000-0000-0000-0000-000000000002" 00:27:58.763 ], 00:27:58.763 "product_name": "passthru", 00:27:58.763 "block_size": 4096, 00:27:58.763 "num_blocks": 8192, 00:27:58.764 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:58.764 "md_size": 32, 00:27:58.764 "md_interleave": false, 00:27:58.764 "dif_type": 0, 00:27:58.764 "assigned_rate_limits": { 00:27:58.764 "rw_ios_per_sec": 0, 00:27:58.764 "rw_mbytes_per_sec": 0, 00:27:58.764 "r_mbytes_per_sec": 0, 00:27:58.764 "w_mbytes_per_sec": 0 00:27:58.764 }, 00:27:58.764 "claimed": true, 00:27:58.764 "claim_type": "exclusive_write", 00:27:58.764 "zoned": false, 00:27:58.764 "supported_io_types": { 00:27:58.764 "read": true, 00:27:58.764 "write": true, 00:27:58.764 "unmap": true, 00:27:58.764 "flush": true, 00:27:58.764 "reset": true, 00:27:58.764 "nvme_admin": false, 00:27:58.764 "nvme_io": false, 00:27:58.764 "nvme_io_md": false, 00:27:58.764 "write_zeroes": true, 00:27:58.764 "zcopy": true, 00:27:58.764 "get_zone_info": false, 00:27:58.764 "zone_management": false, 00:27:58.764 "zone_append": false, 00:27:58.764 "compare": false, 00:27:58.764 "compare_and_write": false, 00:27:58.764 "abort": true, 00:27:58.764 "seek_hole": false, 00:27:58.764 "seek_data": false, 00:27:58.764 "copy": true, 00:27:58.764 "nvme_iov_md": false 00:27:58.764 }, 00:27:58.764 "memory_domains": [ 00:27:58.764 { 00:27:58.764 "dma_device_id": "system", 00:27:58.764 "dma_device_type": 1 00:27:58.764 }, 00:27:58.764 { 00:27:58.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:58.764 "dma_device_type": 2 00:27:58.764 } 00:27:58.764 ], 00:27:58.764 "driver_specific": { 00:27:58.764 "passthru": { 00:27:58.764 "name": "pt2", 00:27:58.764 "base_bdev_name": "malloc2" 00:27:58.764 } 00:27:58.764 } 00:27:58.764 }' 00:27:58.764 09:54:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:58.764 09:54:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:58.764 09:54:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:27:58.764 09:54:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:58.764 09:54:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:58.764 09:54:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:58.764 09:54:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:58.764 09:54:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:58.764 09:54:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:27:58.764 09:54:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:58.764 09:54:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:58.764 09:54:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:27:58.764 09:54:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:58.764 09:54:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:27:58.764 [2024-07-15 09:54:26.864378] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:27:59.022 09:54:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=380223d8-4290-11ef-a0af-c98d8ee52a94 00:27:59.022 09:54:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # '[' -z 380223d8-4290-11ef-a0af-c98d8ee52a94 ']' 00:27:59.022 09:54:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:59.022 [2024-07-15 09:54:27.040354] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:59.022 [2024-07-15 09:54:27.040373] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:59.022 [2024-07-15 09:54:27.040389] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:59.022 [2024-07-15 09:54:27.040400] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:59.022 [2024-07-15 09:54:27.040404] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x132823834f00 name raid_bdev1, state offline 00:27:59.022 09:54:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:59.022 09:54:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:27:59.281 09:54:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:27:59.281 09:54:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:27:59.281 09:54:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:27:59.281 09:54:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:27:59.539 09:54:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:27:59.539 09:54:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:59.539 09:54:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:27:59.539 09:54:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:27:59.798 09:54:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:27:59.798 09:54:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:27:59.798 09:54:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:27:59.798 09:54:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:27:59.798 09:54:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:59.798 
09:54:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:59.798 09:54:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:59.798 09:54:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:59.798 09:54:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:59.798 09:54:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:59.798 09:54:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:59.798 09:54:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:27:59.798 09:54:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:28:00.057 [2024-07-15 09:54:28.080436] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:28:00.057 [2024-07-15 09:54:28.081127] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:28:00.057 [2024-07-15 09:54:28.081153] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:28:00.057 [2024-07-15 09:54:28.081193] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:28:00.057 [2024-07-15 09:54:28.081202] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:00.057 [2024-07-15 09:54:28.081205] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x132823834c80 name raid_bdev1, state configuring 00:28:00.057 request: 00:28:00.057 { 00:28:00.057 "name": "raid_bdev1", 00:28:00.057 "raid_level": "raid1", 00:28:00.057 "base_bdevs": [ 00:28:00.057 "malloc1", 00:28:00.057 "malloc2" 00:28:00.057 ], 00:28:00.057 "superblock": false, 00:28:00.057 "method": "bdev_raid_create", 00:28:00.057 "req_id": 1 00:28:00.057 } 00:28:00.057 Got JSON-RPC error response 00:28:00.057 response: 00:28:00.057 { 00:28:00.057 "code": -17, 00:28:00.057 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:28:00.057 } 00:28:00.057 09:54:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # es=1 00:28:00.057 09:54:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:00.057 09:54:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:00.057 09:54:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:00.057 09:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:28:00.057 09:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:00.316 09:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:28:00.316 09:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 
-- # '[' -n '' ']' 00:28:00.316 09:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:00.574 [2024-07-15 09:54:28.500460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:00.574 [2024-07-15 09:54:28.500503] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:00.574 [2024-07-15 09:54:28.500512] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x132823834780 00:28:00.574 [2024-07-15 09:54:28.500518] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:00.574 [2024-07-15 09:54:28.501170] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:00.574 [2024-07-15 09:54:28.501202] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:00.574 [2024-07-15 09:54:28.501220] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:28:00.574 [2024-07-15 09:54:28.501231] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:00.574 pt1 00:28:00.574 09:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:28:00.574 09:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:00.575 09:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:00.575 09:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:00.575 09:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:00.575 09:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:00.575 09:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:00.575 09:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:00.575 09:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:00.575 09:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:00.575 09:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:00.575 09:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:00.834 09:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:00.834 "name": "raid_bdev1", 00:28:00.834 "uuid": "380223d8-4290-11ef-a0af-c98d8ee52a94", 00:28:00.834 "strip_size_kb": 0, 00:28:00.834 "state": "configuring", 00:28:00.834 "raid_level": "raid1", 00:28:00.834 "superblock": true, 00:28:00.834 "num_base_bdevs": 2, 00:28:00.834 "num_base_bdevs_discovered": 1, 00:28:00.834 "num_base_bdevs_operational": 2, 00:28:00.834 "base_bdevs_list": [ 00:28:00.834 { 00:28:00.834 "name": "pt1", 00:28:00.834 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:00.834 "is_configured": true, 00:28:00.834 "data_offset": 256, 00:28:00.834 "data_size": 7936 00:28:00.834 }, 00:28:00.834 { 
00:28:00.834 "name": null, 00:28:00.834 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:00.834 "is_configured": false, 00:28:00.834 "data_offset": 256, 00:28:00.834 "data_size": 7936 00:28:00.834 } 00:28:00.834 ] 00:28:00.834 }' 00:28:00.834 09:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:00.834 09:54:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:01.094 09:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:28:01.094 09:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:28:01.094 09:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:28:01.094 09:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:01.094 [2024-07-15 09:54:29.176501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:01.094 [2024-07-15 09:54:29.176554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:01.094 [2024-07-15 09:54:29.176563] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x132823834f00 00:28:01.094 [2024-07-15 09:54:29.176569] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:01.094 [2024-07-15 09:54:29.176628] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:01.094 [2024-07-15 09:54:29.176634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:01.094 [2024-07-15 09:54:29.176649] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:28:01.094 [2024-07-15 09:54:29.176655] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:01.094 [2024-07-15 09:54:29.176669] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x132823835180 00:28:01.094 [2024-07-15 09:54:29.176672] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:28:01.094 [2024-07-15 09:54:29.176687] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x132823897e20 00:28:01.094 [2024-07-15 09:54:29.176706] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x132823835180 00:28:01.094 [2024-07-15 09:54:29.176709] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x132823835180 00:28:01.094 [2024-07-15 09:54:29.176721] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:01.094 pt2 00:28:01.094 09:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:28:01.094 09:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:28:01.094 09:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:01.094 09:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:01.094 09:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:01.094 09:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:01.094 
09:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:01.094 09:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:01.094 09:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:01.094 09:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:01.094 09:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:01.094 09:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:01.094 09:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:01.094 09:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:01.353 09:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:01.353 "name": "raid_bdev1", 00:28:01.353 "uuid": "380223d8-4290-11ef-a0af-c98d8ee52a94", 00:28:01.353 "strip_size_kb": 0, 00:28:01.353 "state": "online", 00:28:01.353 "raid_level": "raid1", 00:28:01.353 "superblock": true, 00:28:01.353 "num_base_bdevs": 2, 00:28:01.353 "num_base_bdevs_discovered": 2, 00:28:01.353 "num_base_bdevs_operational": 2, 00:28:01.353 "base_bdevs_list": [ 00:28:01.353 { 00:28:01.353 "name": "pt1", 00:28:01.353 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:01.353 "is_configured": true, 00:28:01.353 "data_offset": 256, 00:28:01.353 "data_size": 7936 00:28:01.353 }, 00:28:01.353 { 00:28:01.353 "name": "pt2", 00:28:01.353 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:01.353 "is_configured": true, 00:28:01.353 "data_offset": 256, 00:28:01.353 "data_size": 7936 00:28:01.353 } 00:28:01.353 ] 00:28:01.353 }' 00:28:01.353 09:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:01.353 09:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:01.613 09:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:28:01.613 09:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:28:01.613 09:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:28:01.613 09:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:01.613 09:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:01.613 09:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:28:01.613 09:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:01.613 09:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:28:01.872 [2024-07-15 09:54:29.892582] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:01.872 09:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:01.872 "name": "raid_bdev1", 00:28:01.872 "aliases": [ 00:28:01.872 
"380223d8-4290-11ef-a0af-c98d8ee52a94" 00:28:01.872 ], 00:28:01.872 "product_name": "Raid Volume", 00:28:01.872 "block_size": 4096, 00:28:01.872 "num_blocks": 7936, 00:28:01.872 "uuid": "380223d8-4290-11ef-a0af-c98d8ee52a94", 00:28:01.872 "md_size": 32, 00:28:01.872 "md_interleave": false, 00:28:01.872 "dif_type": 0, 00:28:01.872 "assigned_rate_limits": { 00:28:01.872 "rw_ios_per_sec": 0, 00:28:01.872 "rw_mbytes_per_sec": 0, 00:28:01.872 "r_mbytes_per_sec": 0, 00:28:01.872 "w_mbytes_per_sec": 0 00:28:01.872 }, 00:28:01.872 "claimed": false, 00:28:01.872 "zoned": false, 00:28:01.872 "supported_io_types": { 00:28:01.872 "read": true, 00:28:01.872 "write": true, 00:28:01.872 "unmap": false, 00:28:01.872 "flush": false, 00:28:01.872 "reset": true, 00:28:01.872 "nvme_admin": false, 00:28:01.872 "nvme_io": false, 00:28:01.872 "nvme_io_md": false, 00:28:01.872 "write_zeroes": true, 00:28:01.872 "zcopy": false, 00:28:01.872 "get_zone_info": false, 00:28:01.872 "zone_management": false, 00:28:01.872 "zone_append": false, 00:28:01.872 "compare": false, 00:28:01.872 "compare_and_write": false, 00:28:01.872 "abort": false, 00:28:01.872 "seek_hole": false, 00:28:01.872 "seek_data": false, 00:28:01.872 "copy": false, 00:28:01.872 "nvme_iov_md": false 00:28:01.872 }, 00:28:01.872 "memory_domains": [ 00:28:01.872 { 00:28:01.872 "dma_device_id": "system", 00:28:01.872 "dma_device_type": 1 00:28:01.872 }, 00:28:01.872 { 00:28:01.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:01.872 "dma_device_type": 2 00:28:01.872 }, 00:28:01.872 { 00:28:01.872 "dma_device_id": "system", 00:28:01.872 "dma_device_type": 1 00:28:01.872 }, 00:28:01.872 { 00:28:01.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:01.872 "dma_device_type": 2 00:28:01.872 } 00:28:01.872 ], 00:28:01.872 "driver_specific": { 00:28:01.872 "raid": { 00:28:01.872 "uuid": "380223d8-4290-11ef-a0af-c98d8ee52a94", 00:28:01.872 "strip_size_kb": 0, 00:28:01.872 "state": "online", 00:28:01.872 "raid_level": "raid1", 00:28:01.872 "superblock": true, 00:28:01.872 "num_base_bdevs": 2, 00:28:01.872 "num_base_bdevs_discovered": 2, 00:28:01.872 "num_base_bdevs_operational": 2, 00:28:01.872 "base_bdevs_list": [ 00:28:01.872 { 00:28:01.872 "name": "pt1", 00:28:01.872 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:01.872 "is_configured": true, 00:28:01.872 "data_offset": 256, 00:28:01.872 "data_size": 7936 00:28:01.872 }, 00:28:01.872 { 00:28:01.872 "name": "pt2", 00:28:01.872 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:01.872 "is_configured": true, 00:28:01.872 "data_offset": 256, 00:28:01.872 "data_size": 7936 00:28:01.872 } 00:28:01.872 ] 00:28:01.872 } 00:28:01.872 } 00:28:01.872 }' 00:28:01.872 09:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:01.872 09:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:28:01.872 pt2' 00:28:01.872 09:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:01.872 09:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:28:01.872 09:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:02.137 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:02.138 "name": "pt1", 
00:28:02.138 "aliases": [ 00:28:02.138 "00000000-0000-0000-0000-000000000001" 00:28:02.138 ], 00:28:02.138 "product_name": "passthru", 00:28:02.138 "block_size": 4096, 00:28:02.138 "num_blocks": 8192, 00:28:02.138 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:02.138 "md_size": 32, 00:28:02.138 "md_interleave": false, 00:28:02.138 "dif_type": 0, 00:28:02.138 "assigned_rate_limits": { 00:28:02.138 "rw_ios_per_sec": 0, 00:28:02.138 "rw_mbytes_per_sec": 0, 00:28:02.138 "r_mbytes_per_sec": 0, 00:28:02.138 "w_mbytes_per_sec": 0 00:28:02.138 }, 00:28:02.138 "claimed": true, 00:28:02.138 "claim_type": "exclusive_write", 00:28:02.138 "zoned": false, 00:28:02.138 "supported_io_types": { 00:28:02.138 "read": true, 00:28:02.138 "write": true, 00:28:02.138 "unmap": true, 00:28:02.138 "flush": true, 00:28:02.138 "reset": true, 00:28:02.138 "nvme_admin": false, 00:28:02.138 "nvme_io": false, 00:28:02.138 "nvme_io_md": false, 00:28:02.138 "write_zeroes": true, 00:28:02.138 "zcopy": true, 00:28:02.138 "get_zone_info": false, 00:28:02.138 "zone_management": false, 00:28:02.138 "zone_append": false, 00:28:02.138 "compare": false, 00:28:02.138 "compare_and_write": false, 00:28:02.138 "abort": true, 00:28:02.138 "seek_hole": false, 00:28:02.138 "seek_data": false, 00:28:02.138 "copy": true, 00:28:02.138 "nvme_iov_md": false 00:28:02.138 }, 00:28:02.138 "memory_domains": [ 00:28:02.138 { 00:28:02.138 "dma_device_id": "system", 00:28:02.138 "dma_device_type": 1 00:28:02.138 }, 00:28:02.138 { 00:28:02.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:02.138 "dma_device_type": 2 00:28:02.138 } 00:28:02.138 ], 00:28:02.138 "driver_specific": { 00:28:02.138 "passthru": { 00:28:02.138 "name": "pt1", 00:28:02.138 "base_bdev_name": "malloc1" 00:28:02.138 } 00:28:02.138 } 00:28:02.138 }' 00:28:02.138 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:02.138 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:02.138 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:28:02.138 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:02.138 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:02.138 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:28:02.138 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:02.138 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:02.138 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:28:02.138 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:02.138 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:02.138 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:28:02.138 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:02.138 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:28:02.138 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:02.397 
09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:02.397 "name": "pt2", 00:28:02.397 "aliases": [ 00:28:02.397 "00000000-0000-0000-0000-000000000002" 00:28:02.397 ], 00:28:02.397 "product_name": "passthru", 00:28:02.397 "block_size": 4096, 00:28:02.397 "num_blocks": 8192, 00:28:02.397 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:02.397 "md_size": 32, 00:28:02.397 "md_interleave": false, 00:28:02.397 "dif_type": 0, 00:28:02.397 "assigned_rate_limits": { 00:28:02.397 "rw_ios_per_sec": 0, 00:28:02.397 "rw_mbytes_per_sec": 0, 00:28:02.397 "r_mbytes_per_sec": 0, 00:28:02.397 "w_mbytes_per_sec": 0 00:28:02.397 }, 00:28:02.397 "claimed": true, 00:28:02.397 "claim_type": "exclusive_write", 00:28:02.397 "zoned": false, 00:28:02.397 "supported_io_types": { 00:28:02.397 "read": true, 00:28:02.397 "write": true, 00:28:02.397 "unmap": true, 00:28:02.397 "flush": true, 00:28:02.397 "reset": true, 00:28:02.397 "nvme_admin": false, 00:28:02.397 "nvme_io": false, 00:28:02.397 "nvme_io_md": false, 00:28:02.397 "write_zeroes": true, 00:28:02.397 "zcopy": true, 00:28:02.397 "get_zone_info": false, 00:28:02.397 "zone_management": false, 00:28:02.397 "zone_append": false, 00:28:02.397 "compare": false, 00:28:02.397 "compare_and_write": false, 00:28:02.397 "abort": true, 00:28:02.397 "seek_hole": false, 00:28:02.397 "seek_data": false, 00:28:02.397 "copy": true, 00:28:02.397 "nvme_iov_md": false 00:28:02.397 }, 00:28:02.397 "memory_domains": [ 00:28:02.397 { 00:28:02.397 "dma_device_id": "system", 00:28:02.397 "dma_device_type": 1 00:28:02.397 }, 00:28:02.397 { 00:28:02.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:02.397 "dma_device_type": 2 00:28:02.397 } 00:28:02.397 ], 00:28:02.397 "driver_specific": { 00:28:02.397 "passthru": { 00:28:02.397 "name": "pt2", 00:28:02.397 "base_bdev_name": "malloc2" 00:28:02.397 } 00:28:02.397 } 00:28:02.397 }' 00:28:02.397 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:02.397 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:02.397 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:28:02.397 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:02.397 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:02.397 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:28:02.397 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:02.397 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:02.397 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:28:02.397 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:02.397 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:02.397 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:28:02.397 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:02.397 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | 
.uuid' 00:28:02.656 [2024-07-15 09:54:30.680610] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:02.656 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # '[' 380223d8-4290-11ef-a0af-c98d8ee52a94 '!=' 380223d8-4290-11ef-a0af-c98d8ee52a94 ']' 00:28:02.656 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:28:02.656 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:28:02.656 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:28:02.656 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:28:02.915 [2024-07-15 09:54:30.876595] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:28:02.915 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:02.915 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:02.915 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:02.915 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:02.915 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:02.915 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:02.915 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:02.915 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:02.915 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:02.915 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:02.915 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:02.915 09:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:03.174 09:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:03.174 "name": "raid_bdev1", 00:28:03.174 "uuid": "380223d8-4290-11ef-a0af-c98d8ee52a94", 00:28:03.174 "strip_size_kb": 0, 00:28:03.174 "state": "online", 00:28:03.174 "raid_level": "raid1", 00:28:03.174 "superblock": true, 00:28:03.174 "num_base_bdevs": 2, 00:28:03.174 "num_base_bdevs_discovered": 1, 00:28:03.174 "num_base_bdevs_operational": 1, 00:28:03.174 "base_bdevs_list": [ 00:28:03.174 { 00:28:03.174 "name": null, 00:28:03.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:03.174 "is_configured": false, 00:28:03.174 "data_offset": 256, 00:28:03.174 "data_size": 7936 00:28:03.174 }, 00:28:03.174 { 00:28:03.174 "name": "pt2", 00:28:03.174 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:03.174 "is_configured": true, 00:28:03.174 "data_offset": 256, 00:28:03.174 "data_size": 7936 00:28:03.174 } 00:28:03.174 ] 00:28:03.174 }' 00:28:03.174 09:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:28:03.174 09:54:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:03.474 09:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:03.474 [2024-07-15 09:54:31.556634] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:03.474 [2024-07-15 09:54:31.556658] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:03.474 [2024-07-15 09:54:31.556671] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:03.474 [2024-07-15 09:54:31.556680] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:03.474 [2024-07-15 09:54:31.556684] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x132823835180 name raid_bdev1, state offline 00:28:03.474 09:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:03.474 09:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:28:03.733 09:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:28:03.733 09:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:28:03.733 09:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:28:03.733 09:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:28:03.733 09:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:28:03.991 09:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:28:03.991 09:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:28:03.991 09:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:28:03.991 09:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:28:03.991 09:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@518 -- # i=1 00:28:03.991 09:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:04.250 [2024-07-15 09:54:32.152660] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:04.250 [2024-07-15 09:54:32.152714] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:04.250 [2024-07-15 09:54:32.152723] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x132823834f00 00:28:04.250 [2024-07-15 09:54:32.152730] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:04.250 [2024-07-15 09:54:32.153469] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:04.250 [2024-07-15 09:54:32.153500] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:04.250 [2024-07-15 09:54:32.153519] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt2 00:28:04.250 [2024-07-15 09:54:32.153529] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:04.250 [2024-07-15 09:54:32.153543] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x132823835180 00:28:04.250 [2024-07-15 09:54:32.153546] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:28:04.250 [2024-07-15 09:54:32.153566] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x132823897e20 00:28:04.250 [2024-07-15 09:54:32.153591] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x132823835180 00:28:04.250 [2024-07-15 09:54:32.153594] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x132823835180 00:28:04.250 [2024-07-15 09:54:32.153604] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:04.250 pt2 00:28:04.250 09:54:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:04.250 09:54:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:04.250 09:54:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:04.250 09:54:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:04.250 09:54:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:04.250 09:54:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:04.250 09:54:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:04.250 09:54:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:04.250 09:54:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:04.250 09:54:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:04.250 09:54:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:04.250 09:54:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:04.508 09:54:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:04.508 "name": "raid_bdev1", 00:28:04.508 "uuid": "380223d8-4290-11ef-a0af-c98d8ee52a94", 00:28:04.508 "strip_size_kb": 0, 00:28:04.508 "state": "online", 00:28:04.508 "raid_level": "raid1", 00:28:04.508 "superblock": true, 00:28:04.508 "num_base_bdevs": 2, 00:28:04.508 "num_base_bdevs_discovered": 1, 00:28:04.508 "num_base_bdevs_operational": 1, 00:28:04.508 "base_bdevs_list": [ 00:28:04.508 { 00:28:04.508 "name": null, 00:28:04.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:04.508 "is_configured": false, 00:28:04.508 "data_offset": 256, 00:28:04.508 "data_size": 7936 00:28:04.508 }, 00:28:04.508 { 00:28:04.508 "name": "pt2", 00:28:04.508 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:04.508 "is_configured": true, 00:28:04.508 "data_offset": 256, 00:28:04.508 "data_size": 7936 00:28:04.508 } 00:28:04.508 ] 00:28:04.508 }' 00:28:04.508 09:54:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:28:04.508 09:54:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:04.767 09:54:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:04.767 [2024-07-15 09:54:32.844694] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:04.767 [2024-07-15 09:54:32.844718] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:04.767 [2024-07-15 09:54:32.844732] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:04.767 [2024-07-15 09:54:32.844740] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:04.767 [2024-07-15 09:54:32.844744] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x132823835180 name raid_bdev1, state offline 00:28:04.767 09:54:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:04.767 09:54:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:28:05.026 09:54:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:28:05.026 09:54:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:28:05.026 09:54:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:28:05.026 09:54:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:05.283 [2024-07-15 09:54:33.236718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:05.283 [2024-07-15 09:54:33.236758] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:05.283 [2024-07-15 09:54:33.236767] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x132823834c80 00:28:05.283 [2024-07-15 09:54:33.236774] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:05.283 [2024-07-15 09:54:33.237446] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:05.283 [2024-07-15 09:54:33.237473] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:05.283 [2024-07-15 09:54:33.237491] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:28:05.283 [2024-07-15 09:54:33.237500] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:05.283 [2024-07-15 09:54:33.237516] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:28:05.283 [2024-07-15 09:54:33.237520] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:05.283 [2024-07-15 09:54:33.237524] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x132823834780 name raid_bdev1, state configuring 00:28:05.283 [2024-07-15 09:54:33.237530] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:05.283 [2024-07-15 09:54:33.237540] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x132823834780 00:28:05.283 [2024-07-15 
09:54:33.237543] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:28:05.283 [2024-07-15 09:54:33.237561] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x132823897e20 00:28:05.283 [2024-07-15 09:54:33.237582] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x132823834780 00:28:05.283 [2024-07-15 09:54:33.237590] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x132823834780 00:28:05.283 [2024-07-15 09:54:33.237600] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:05.283 pt1 00:28:05.283 09:54:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:28:05.283 09:54:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:05.283 09:54:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:05.283 09:54:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:05.283 09:54:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:05.283 09:54:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:05.283 09:54:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:05.283 09:54:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:05.283 09:54:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:05.283 09:54:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:05.283 09:54:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:05.283 09:54:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:05.283 09:54:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:05.550 09:54:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:05.550 "name": "raid_bdev1", 00:28:05.550 "uuid": "380223d8-4290-11ef-a0af-c98d8ee52a94", 00:28:05.550 "strip_size_kb": 0, 00:28:05.550 "state": "online", 00:28:05.550 "raid_level": "raid1", 00:28:05.550 "superblock": true, 00:28:05.550 "num_base_bdevs": 2, 00:28:05.550 "num_base_bdevs_discovered": 1, 00:28:05.550 "num_base_bdevs_operational": 1, 00:28:05.550 "base_bdevs_list": [ 00:28:05.550 { 00:28:05.550 "name": null, 00:28:05.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:05.550 "is_configured": false, 00:28:05.550 "data_offset": 256, 00:28:05.550 "data_size": 7936 00:28:05.550 }, 00:28:05.550 { 00:28:05.550 "name": "pt2", 00:28:05.550 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:05.550 "is_configured": true, 00:28:05.550 "data_offset": 256, 00:28:05.550 "data_size": 7936 00:28:05.550 } 00:28:05.550 ] 00:28:05.550 }' 00:28:05.550 09:54:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:05.550 09:54:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:05.810 09:54:33 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:28:05.810 09:54:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:28:06.068 09:54:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:28:06.068 09:54:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:06.068 09:54:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:28:06.068 [2024-07-15 09:54:34.092777] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:06.068 09:54:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # '[' 380223d8-4290-11ef-a0af-c98d8ee52a94 '!=' 380223d8-4290-11ef-a0af-c98d8ee52a94 ']' 00:28:06.068 09:54:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@562 -- # killprocess 66189 00:28:06.068 09:54:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@948 -- # '[' -z 66189 ']' 00:28:06.068 09:54:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # kill -0 66189 00:28:06.068 09:54:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # uname 00:28:06.068 09:54:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:28:06.068 09:54:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps -c -o command 66189 00:28:06.068 09:54:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # tail -1 00:28:06.068 09:54:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:28:06.068 killing process with pid 66189 00:28:06.068 09:54:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:28:06.068 09:54:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66189' 00:28:06.068 09:54:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@967 -- # kill 66189 00:28:06.068 [2024-07-15 09:54:34.122516] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:06.068 [2024-07-15 09:54:34.122533] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:06.068 [2024-07-15 09:54:34.122543] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:06.068 [2024-07-15 09:54:34.122546] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x132823834780 name raid_bdev1, state offline 00:28:06.068 09:54:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # wait 66189 00:28:06.068 [2024-07-15 09:54:34.139781] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:06.327 09:54:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@564 -- # return 0 00:28:06.327 00:28:06.327 real 0m10.891s 00:28:06.327 user 0m19.140s 00:28:06.327 sys 0m1.883s 00:28:06.327 09:54:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:06.327 09:54:34 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:28:06.327 ************************************ 00:28:06.327 END TEST raid_superblock_test_md_separate 00:28:06.327 ************************************ 00:28:06.587 09:54:34 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:28:06.587 09:54:34 bdev_raid -- bdev/bdev_raid.sh@907 -- # '[' '' = true ']' 00:28:06.587 09:54:34 bdev_raid -- bdev/bdev_raid.sh@911 -- # base_malloc_params='-m 32 -i' 00:28:06.587 09:54:34 bdev_raid -- bdev/bdev_raid.sh@912 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:28:06.587 09:54:34 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:28:06.587 09:54:34 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:06.587 09:54:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:06.587 ************************************ 00:28:06.587 START TEST raid_state_function_test_sb_md_interleaved 00:28:06.587 ************************************ 00:28:06.587 09:54:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:28:06.587 09:54:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:28:06.587 09:54:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:28:06.587 09:54:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:28:06.587 09:54:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:28:06.587 09:54:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:28:06.587 09:54:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:06.587 09:54:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:28:06.587 09:54:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:28:06.587 09:54:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:06.587 09:54:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:28:06.587 09:54:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:28:06.587 09:54:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:06.587 09:54:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:28:06.587 09:54:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:28:06.587 09:54:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:28:06.587 09:54:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local strip_size 00:28:06.587 09:54:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:28:06.587 09:54:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:28:06.587 09:54:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # '[' 
raid1 '!=' raid1 ']' 00:28:06.587 09:54:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:28:06.587 09:54:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:28:06.587 09:54:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:28:06.587 09:54:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # raid_pid=66568 00:28:06.587 Process raid pid: 66568 00:28:06.587 09:54:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 66568' 00:28:06.587 09:54:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # waitforlisten 66568 /var/tmp/spdk-raid.sock 00:28:06.587 09:54:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:28:06.587 09:54:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 66568 ']' 00:28:06.587 09:54:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:06.587 09:54:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:06.587 09:54:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:06.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:06.587 09:54:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:06.587 09:54:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:06.587 [2024-07-15 09:54:34.471172] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:28:06.587 [2024-07-15 09:54:34.471414] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:28:06.846 EAL: TSC is not safe to use in SMP mode 00:28:06.846 EAL: TSC is not invariant 00:28:06.846 [2024-07-15 09:54:34.899407] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.105 [2024-07-15 09:54:35.013479] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
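Once the reactor below comes up, everything this test does goes through scripts/rpc.py against the /var/tmp/spdk-raid.sock socket served by the bdev_svc app launched above. A minimal sketch of the create-and-verify round trip, using only RPC calls and parameters that appear verbatim later in this log (the rpc shell variable is shorthand introduced here for readability, not part of the harness):

# talk to the dedicated bdev_svc instance over its Unix-domain RPC socket
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# 32 MiB malloc bdevs with 4096-byte blocks and 32 bytes of interleaved
# metadata per block (-m 32 -i), so the exposed block size becomes 4128
$rpc bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1
$rpc bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2
# assemble the two base bdevs into a raid1 volume with an on-disk superblock (-s)
$rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
# fetch the raid state the same way verify_raid_bdev_state does
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

The verify_raid_bdev_state helper seen throughout this log is essentially the last step above plus jq comparisons of state, raid_level, strip_size_kb, and the base_bdevs_list counters against the expected values.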
00:28:07.105 [2024-07-15 09:54:35.015923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.105 [2024-07-15 09:54:35.016625] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:07.105 [2024-07-15 09:54:35.016636] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:07.364 09:54:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:07.364 09:54:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:28:07.364 09:54:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:28:07.623 [2024-07-15 09:54:35.567786] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:07.623 [2024-07-15 09:54:35.567852] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:07.623 [2024-07-15 09:54:35.567857] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:07.623 [2024-07-15 09:54:35.567864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:07.623 09:54:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:28:07.623 09:54:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:07.623 09:54:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:07.623 09:54:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:07.623 09:54:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:07.623 09:54:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:07.623 09:54:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:07.623 09:54:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:07.623 09:54:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:07.623 09:54:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:07.623 09:54:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:07.623 09:54:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:07.882 09:54:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:07.882 "name": "Existed_Raid", 00:28:07.882 "uuid": "3e0f2254-4290-11ef-a0af-c98d8ee52a94", 00:28:07.882 "strip_size_kb": 0, 00:28:07.882 "state": "configuring", 00:28:07.882 "raid_level": "raid1", 00:28:07.882 "superblock": true, 00:28:07.882 "num_base_bdevs": 2, 00:28:07.882 "num_base_bdevs_discovered": 0, 00:28:07.882 "num_base_bdevs_operational": 2, 00:28:07.882 
"base_bdevs_list": [ 00:28:07.882 { 00:28:07.882 "name": "BaseBdev1", 00:28:07.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:07.882 "is_configured": false, 00:28:07.882 "data_offset": 0, 00:28:07.882 "data_size": 0 00:28:07.882 }, 00:28:07.882 { 00:28:07.882 "name": "BaseBdev2", 00:28:07.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:07.882 "is_configured": false, 00:28:07.882 "data_offset": 0, 00:28:07.882 "data_size": 0 00:28:07.882 } 00:28:07.882 ] 00:28:07.882 }' 00:28:07.882 09:54:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:07.882 09:54:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:08.159 09:54:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:08.418 [2024-07-15 09:54:36.263784] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:08.418 [2024-07-15 09:54:36.263810] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f39f2234500 name Existed_Raid, state configuring 00:28:08.418 09:54:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:28:08.418 [2024-07-15 09:54:36.459802] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:08.418 [2024-07-15 09:54:36.459847] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:08.418 [2024-07-15 09:54:36.459850] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:08.418 [2024-07-15 09:54:36.459857] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:08.418 09:54:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:28:08.677 [2024-07-15 09:54:36.636871] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:08.677 BaseBdev1 00:28:08.677 09:54:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:28:08.677 09:54:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:28:08.677 09:54:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:28:08.677 09:54:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local i 00:28:08.677 09:54:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:28:08.677 09:54:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:28:08.677 09:54:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:08.936 09:54:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 
2000 00:28:08.936 [ 00:28:08.936 { 00:28:08.936 "name": "BaseBdev1", 00:28:08.936 "aliases": [ 00:28:08.936 "3eb21ad0-4290-11ef-a0af-c98d8ee52a94" 00:28:08.936 ], 00:28:08.936 "product_name": "Malloc disk", 00:28:08.936 "block_size": 4128, 00:28:08.936 "num_blocks": 8192, 00:28:08.936 "uuid": "3eb21ad0-4290-11ef-a0af-c98d8ee52a94", 00:28:08.936 "md_size": 32, 00:28:08.936 "md_interleave": true, 00:28:08.936 "dif_type": 0, 00:28:08.936 "assigned_rate_limits": { 00:28:08.936 "rw_ios_per_sec": 0, 00:28:08.936 "rw_mbytes_per_sec": 0, 00:28:08.936 "r_mbytes_per_sec": 0, 00:28:08.936 "w_mbytes_per_sec": 0 00:28:08.936 }, 00:28:08.936 "claimed": true, 00:28:08.936 "claim_type": "exclusive_write", 00:28:08.936 "zoned": false, 00:28:08.936 "supported_io_types": { 00:28:08.936 "read": true, 00:28:08.936 "write": true, 00:28:08.936 "unmap": true, 00:28:08.936 "flush": true, 00:28:08.936 "reset": true, 00:28:08.936 "nvme_admin": false, 00:28:08.936 "nvme_io": false, 00:28:08.936 "nvme_io_md": false, 00:28:08.936 "write_zeroes": true, 00:28:08.936 "zcopy": true, 00:28:08.936 "get_zone_info": false, 00:28:08.936 "zone_management": false, 00:28:08.936 "zone_append": false, 00:28:08.936 "compare": false, 00:28:08.936 "compare_and_write": false, 00:28:08.936 "abort": true, 00:28:08.936 "seek_hole": false, 00:28:08.936 "seek_data": false, 00:28:08.936 "copy": true, 00:28:08.936 "nvme_iov_md": false 00:28:08.936 }, 00:28:08.936 "memory_domains": [ 00:28:08.936 { 00:28:08.936 "dma_device_id": "system", 00:28:08.936 "dma_device_type": 1 00:28:08.936 }, 00:28:08.936 { 00:28:08.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:08.937 "dma_device_type": 2 00:28:08.937 } 00:28:08.937 ], 00:28:08.937 "driver_specific": {} 00:28:08.937 } 00:28:08.937 ] 00:28:08.937 09:54:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # return 0 00:28:08.937 09:54:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:28:08.937 09:54:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:08.937 09:54:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:08.937 09:54:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:08.937 09:54:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:08.937 09:54:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:08.937 09:54:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:08.937 09:54:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:08.937 09:54:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:08.937 09:54:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:08.937 09:54:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:08.937 09:54:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:28:09.196 09:54:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:09.196 "name": "Existed_Raid", 00:28:09.196 "uuid": "3e973ee0-4290-11ef-a0af-c98d8ee52a94", 00:28:09.196 "strip_size_kb": 0, 00:28:09.196 "state": "configuring", 00:28:09.196 "raid_level": "raid1", 00:28:09.196 "superblock": true, 00:28:09.196 "num_base_bdevs": 2, 00:28:09.196 "num_base_bdevs_discovered": 1, 00:28:09.196 "num_base_bdevs_operational": 2, 00:28:09.196 "base_bdevs_list": [ 00:28:09.196 { 00:28:09.196 "name": "BaseBdev1", 00:28:09.196 "uuid": "3eb21ad0-4290-11ef-a0af-c98d8ee52a94", 00:28:09.196 "is_configured": true, 00:28:09.196 "data_offset": 256, 00:28:09.196 "data_size": 7936 00:28:09.196 }, 00:28:09.196 { 00:28:09.196 "name": "BaseBdev2", 00:28:09.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:09.196 "is_configured": false, 00:28:09.196 "data_offset": 0, 00:28:09.196 "data_size": 0 00:28:09.196 } 00:28:09.196 ] 00:28:09.196 }' 00:28:09.196 09:54:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:09.196 09:54:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:09.454 09:54:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:09.713 [2024-07-15 09:54:37.703861] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:09.713 [2024-07-15 09:54:37.703890] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f39f2234500 name Existed_Raid, state configuring 00:28:09.713 09:54:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:28:09.997 [2024-07-15 09:54:37.911875] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:09.997 [2024-07-15 09:54:37.912750] bdev.c:8183:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:09.997 [2024-07-15 09:54:37.912820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:09.997 09:54:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:28:09.997 09:54:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:09.997 09:54:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:28:09.997 09:54:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:09.997 09:54:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:09.997 09:54:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:09.997 09:54:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:09.997 09:54:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:09.997 09:54:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:28:09.997 09:54:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:09.997 09:54:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:09.997 09:54:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:09.997 09:54:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:09.997 09:54:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:10.255 09:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:10.255 "name": "Existed_Raid", 00:28:10.255 "uuid": "3f74d061-4290-11ef-a0af-c98d8ee52a94", 00:28:10.255 "strip_size_kb": 0, 00:28:10.255 "state": "configuring", 00:28:10.255 "raid_level": "raid1", 00:28:10.255 "superblock": true, 00:28:10.255 "num_base_bdevs": 2, 00:28:10.255 "num_base_bdevs_discovered": 1, 00:28:10.255 "num_base_bdevs_operational": 2, 00:28:10.255 "base_bdevs_list": [ 00:28:10.255 { 00:28:10.255 "name": "BaseBdev1", 00:28:10.255 "uuid": "3eb21ad0-4290-11ef-a0af-c98d8ee52a94", 00:28:10.255 "is_configured": true, 00:28:10.255 "data_offset": 256, 00:28:10.255 "data_size": 7936 00:28:10.255 }, 00:28:10.255 { 00:28:10.255 "name": "BaseBdev2", 00:28:10.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:10.255 "is_configured": false, 00:28:10.255 "data_offset": 0, 00:28:10.255 "data_size": 0 00:28:10.255 } 00:28:10.255 ] 00:28:10.255 }' 00:28:10.255 09:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:10.255 09:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:10.513 09:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:28:10.772 [2024-07-15 09:54:38.643993] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:10.772 [2024-07-15 09:54:38.644048] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f39f2234a00 00:28:10.772 [2024-07-15 09:54:38.644053] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:10.772 [2024-07-15 09:54:38.644070] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f39f2297e20 00:28:10.772 [2024-07-15 09:54:38.644082] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f39f2234a00 00:28:10.772 [2024-07-15 09:54:38.644086] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1f39f2234a00 00:28:10.772 [2024-07-15 09:54:38.644096] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:10.772 BaseBdev2 00:28:10.772 09:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:28:10.772 09:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:28:10.772 09:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:28:10.772 
09:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local i 00:28:10.772 09:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:28:10.772 09:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:28:10.772 09:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:10.772 09:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:11.031 [ 00:28:11.031 { 00:28:11.031 "name": "BaseBdev2", 00:28:11.031 "aliases": [ 00:28:11.031 "3fe4842b-4290-11ef-a0af-c98d8ee52a94" 00:28:11.031 ], 00:28:11.031 "product_name": "Malloc disk", 00:28:11.031 "block_size": 4128, 00:28:11.031 "num_blocks": 8192, 00:28:11.031 "uuid": "3fe4842b-4290-11ef-a0af-c98d8ee52a94", 00:28:11.031 "md_size": 32, 00:28:11.031 "md_interleave": true, 00:28:11.031 "dif_type": 0, 00:28:11.031 "assigned_rate_limits": { 00:28:11.031 "rw_ios_per_sec": 0, 00:28:11.031 "rw_mbytes_per_sec": 0, 00:28:11.031 "r_mbytes_per_sec": 0, 00:28:11.031 "w_mbytes_per_sec": 0 00:28:11.031 }, 00:28:11.031 "claimed": true, 00:28:11.031 "claim_type": "exclusive_write", 00:28:11.031 "zoned": false, 00:28:11.031 "supported_io_types": { 00:28:11.031 "read": true, 00:28:11.031 "write": true, 00:28:11.031 "unmap": true, 00:28:11.031 "flush": true, 00:28:11.031 "reset": true, 00:28:11.031 "nvme_admin": false, 00:28:11.031 "nvme_io": false, 00:28:11.031 "nvme_io_md": false, 00:28:11.031 "write_zeroes": true, 00:28:11.031 "zcopy": true, 00:28:11.031 "get_zone_info": false, 00:28:11.031 "zone_management": false, 00:28:11.031 "zone_append": false, 00:28:11.031 "compare": false, 00:28:11.031 "compare_and_write": false, 00:28:11.031 "abort": true, 00:28:11.031 "seek_hole": false, 00:28:11.031 "seek_data": false, 00:28:11.031 "copy": true, 00:28:11.031 "nvme_iov_md": false 00:28:11.031 }, 00:28:11.031 "memory_domains": [ 00:28:11.031 { 00:28:11.031 "dma_device_id": "system", 00:28:11.031 "dma_device_type": 1 00:28:11.031 }, 00:28:11.031 { 00:28:11.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:11.031 "dma_device_type": 2 00:28:11.031 } 00:28:11.031 ], 00:28:11.031 "driver_specific": {} 00:28:11.031 } 00:28:11.031 ] 00:28:11.031 09:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # return 0 00:28:11.031 09:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:28:11.031 09:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:11.031 09:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:28:11.031 09:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:11.031 09:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:11.031 09:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:11.031 09:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:11.031 09:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:11.031 09:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:11.031 09:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:11.031 09:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:11.031 09:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:11.031 09:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:11.031 09:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:11.292 09:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:11.292 "name": "Existed_Raid", 00:28:11.292 "uuid": "3f74d061-4290-11ef-a0af-c98d8ee52a94", 00:28:11.292 "strip_size_kb": 0, 00:28:11.292 "state": "online", 00:28:11.292 "raid_level": "raid1", 00:28:11.292 "superblock": true, 00:28:11.292 "num_base_bdevs": 2, 00:28:11.292 "num_base_bdevs_discovered": 2, 00:28:11.292 "num_base_bdevs_operational": 2, 00:28:11.292 "base_bdevs_list": [ 00:28:11.292 { 00:28:11.292 "name": "BaseBdev1", 00:28:11.292 "uuid": "3eb21ad0-4290-11ef-a0af-c98d8ee52a94", 00:28:11.292 "is_configured": true, 00:28:11.292 "data_offset": 256, 00:28:11.292 "data_size": 7936 00:28:11.292 }, 00:28:11.292 { 00:28:11.292 "name": "BaseBdev2", 00:28:11.292 "uuid": "3fe4842b-4290-11ef-a0af-c98d8ee52a94", 00:28:11.292 "is_configured": true, 00:28:11.292 "data_offset": 256, 00:28:11.292 "data_size": 7936 00:28:11.292 } 00:28:11.292 ] 00:28:11.292 }' 00:28:11.292 09:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:11.292 09:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:11.550 09:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:28:11.550 09:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:28:11.550 09:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:28:11.550 09:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:11.550 09:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:11.550 09:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:28:11.550 09:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:28:11.550 09:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:28:11.809 [2024-07-15 09:54:39.775997] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:11.809 09:54:39 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:11.809 "name": "Existed_Raid", 00:28:11.809 "aliases": [ 00:28:11.809 "3f74d061-4290-11ef-a0af-c98d8ee52a94" 00:28:11.809 ], 00:28:11.809 "product_name": "Raid Volume", 00:28:11.809 "block_size": 4128, 00:28:11.809 "num_blocks": 7936, 00:28:11.809 "uuid": "3f74d061-4290-11ef-a0af-c98d8ee52a94", 00:28:11.809 "md_size": 32, 00:28:11.809 "md_interleave": true, 00:28:11.809 "dif_type": 0, 00:28:11.809 "assigned_rate_limits": { 00:28:11.809 "rw_ios_per_sec": 0, 00:28:11.809 "rw_mbytes_per_sec": 0, 00:28:11.809 "r_mbytes_per_sec": 0, 00:28:11.809 "w_mbytes_per_sec": 0 00:28:11.809 }, 00:28:11.809 "claimed": false, 00:28:11.809 "zoned": false, 00:28:11.809 "supported_io_types": { 00:28:11.809 "read": true, 00:28:11.809 "write": true, 00:28:11.809 "unmap": false, 00:28:11.809 "flush": false, 00:28:11.809 "reset": true, 00:28:11.809 "nvme_admin": false, 00:28:11.809 "nvme_io": false, 00:28:11.809 "nvme_io_md": false, 00:28:11.809 "write_zeroes": true, 00:28:11.809 "zcopy": false, 00:28:11.809 "get_zone_info": false, 00:28:11.809 "zone_management": false, 00:28:11.809 "zone_append": false, 00:28:11.809 "compare": false, 00:28:11.809 "compare_and_write": false, 00:28:11.809 "abort": false, 00:28:11.809 "seek_hole": false, 00:28:11.809 "seek_data": false, 00:28:11.809 "copy": false, 00:28:11.809 "nvme_iov_md": false 00:28:11.809 }, 00:28:11.809 "memory_domains": [ 00:28:11.809 { 00:28:11.809 "dma_device_id": "system", 00:28:11.809 "dma_device_type": 1 00:28:11.809 }, 00:28:11.809 { 00:28:11.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:11.809 "dma_device_type": 2 00:28:11.809 }, 00:28:11.809 { 00:28:11.809 "dma_device_id": "system", 00:28:11.809 "dma_device_type": 1 00:28:11.809 }, 00:28:11.809 { 00:28:11.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:11.809 "dma_device_type": 2 00:28:11.809 } 00:28:11.809 ], 00:28:11.809 "driver_specific": { 00:28:11.809 "raid": { 00:28:11.809 "uuid": "3f74d061-4290-11ef-a0af-c98d8ee52a94", 00:28:11.809 "strip_size_kb": 0, 00:28:11.809 "state": "online", 00:28:11.809 "raid_level": "raid1", 00:28:11.809 "superblock": true, 00:28:11.809 "num_base_bdevs": 2, 00:28:11.809 "num_base_bdevs_discovered": 2, 00:28:11.809 "num_base_bdevs_operational": 2, 00:28:11.809 "base_bdevs_list": [ 00:28:11.809 { 00:28:11.809 "name": "BaseBdev1", 00:28:11.809 "uuid": "3eb21ad0-4290-11ef-a0af-c98d8ee52a94", 00:28:11.809 "is_configured": true, 00:28:11.809 "data_offset": 256, 00:28:11.809 "data_size": 7936 00:28:11.809 }, 00:28:11.809 { 00:28:11.809 "name": "BaseBdev2", 00:28:11.809 "uuid": "3fe4842b-4290-11ef-a0af-c98d8ee52a94", 00:28:11.809 "is_configured": true, 00:28:11.809 "data_offset": 256, 00:28:11.810 "data_size": 7936 00:28:11.810 } 00:28:11.810 ] 00:28:11.810 } 00:28:11.810 } 00:28:11.810 }' 00:28:11.810 09:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:11.810 09:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:28:11.810 BaseBdev2' 00:28:11.810 09:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:11.810 09:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 
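The field-by-field checks that follow reduce the dumped bdev JSON with jq and compare each property against the expected interleaved-metadata geometry. Condensed into plain bash, the pattern behind the @204 through @208 steps looks roughly like this (expected values are the ones dumped in this log; the exact quoting inside the harness may differ):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# bdev_get_bdevs returns a one-element JSON array; unwrap it with jq '.[]'
base_bdev_info=$($rpc bdev_get_bdevs -b BaseBdev1 | jq '.[]')
# 4096 data bytes plus 32 metadata bytes are exposed as one 4128-byte block
[[ $(jq .block_size <<< "$base_bdev_info") == 4128 ]]
[[ $(jq .md_size <<< "$base_bdev_info") == 32 ]]
[[ $(jq .md_interleave <<< "$base_bdev_info") == true ]]
# no DIF protection is layered on top of the interleaved metadata
[[ $(jq .dif_type <<< "$base_bdev_info") == 0 ]]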
00:28:11.810 09:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:12.068 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:12.068 "name": "BaseBdev1", 00:28:12.068 "aliases": [ 00:28:12.068 "3eb21ad0-4290-11ef-a0af-c98d8ee52a94" 00:28:12.068 ], 00:28:12.068 "product_name": "Malloc disk", 00:28:12.068 "block_size": 4128, 00:28:12.068 "num_blocks": 8192, 00:28:12.068 "uuid": "3eb21ad0-4290-11ef-a0af-c98d8ee52a94", 00:28:12.068 "md_size": 32, 00:28:12.068 "md_interleave": true, 00:28:12.068 "dif_type": 0, 00:28:12.068 "assigned_rate_limits": { 00:28:12.068 "rw_ios_per_sec": 0, 00:28:12.068 "rw_mbytes_per_sec": 0, 00:28:12.068 "r_mbytes_per_sec": 0, 00:28:12.068 "w_mbytes_per_sec": 0 00:28:12.068 }, 00:28:12.068 "claimed": true, 00:28:12.068 "claim_type": "exclusive_write", 00:28:12.068 "zoned": false, 00:28:12.068 "supported_io_types": { 00:28:12.068 "read": true, 00:28:12.068 "write": true, 00:28:12.068 "unmap": true, 00:28:12.068 "flush": true, 00:28:12.068 "reset": true, 00:28:12.068 "nvme_admin": false, 00:28:12.068 "nvme_io": false, 00:28:12.068 "nvme_io_md": false, 00:28:12.068 "write_zeroes": true, 00:28:12.069 "zcopy": true, 00:28:12.069 "get_zone_info": false, 00:28:12.069 "zone_management": false, 00:28:12.069 "zone_append": false, 00:28:12.069 "compare": false, 00:28:12.069 "compare_and_write": false, 00:28:12.069 "abort": true, 00:28:12.069 "seek_hole": false, 00:28:12.069 "seek_data": false, 00:28:12.069 "copy": true, 00:28:12.069 "nvme_iov_md": false 00:28:12.069 }, 00:28:12.069 "memory_domains": [ 00:28:12.069 { 00:28:12.069 "dma_device_id": "system", 00:28:12.069 "dma_device_type": 1 00:28:12.069 }, 00:28:12.069 { 00:28:12.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:12.069 "dma_device_type": 2 00:28:12.069 } 00:28:12.069 ], 00:28:12.069 "driver_specific": {} 00:28:12.069 }' 00:28:12.069 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:12.069 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:12.069 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:28:12.069 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:12.069 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:12.069 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:28:12.069 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:12.069 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:12.069 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:28:12.069 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:12.069 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:12.069 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:28:12.069 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:12.069 09:54:40 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:28:12.069 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:12.328 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:12.328 "name": "BaseBdev2", 00:28:12.328 "aliases": [ 00:28:12.328 "3fe4842b-4290-11ef-a0af-c98d8ee52a94" 00:28:12.328 ], 00:28:12.328 "product_name": "Malloc disk", 00:28:12.328 "block_size": 4128, 00:28:12.328 "num_blocks": 8192, 00:28:12.328 "uuid": "3fe4842b-4290-11ef-a0af-c98d8ee52a94", 00:28:12.328 "md_size": 32, 00:28:12.328 "md_interleave": true, 00:28:12.328 "dif_type": 0, 00:28:12.328 "assigned_rate_limits": { 00:28:12.328 "rw_ios_per_sec": 0, 00:28:12.328 "rw_mbytes_per_sec": 0, 00:28:12.328 "r_mbytes_per_sec": 0, 00:28:12.328 "w_mbytes_per_sec": 0 00:28:12.328 }, 00:28:12.328 "claimed": true, 00:28:12.328 "claim_type": "exclusive_write", 00:28:12.328 "zoned": false, 00:28:12.328 "supported_io_types": { 00:28:12.328 "read": true, 00:28:12.328 "write": true, 00:28:12.328 "unmap": true, 00:28:12.328 "flush": true, 00:28:12.328 "reset": true, 00:28:12.328 "nvme_admin": false, 00:28:12.328 "nvme_io": false, 00:28:12.328 "nvme_io_md": false, 00:28:12.328 "write_zeroes": true, 00:28:12.328 "zcopy": true, 00:28:12.328 "get_zone_info": false, 00:28:12.328 "zone_management": false, 00:28:12.328 "zone_append": false, 00:28:12.328 "compare": false, 00:28:12.328 "compare_and_write": false, 00:28:12.328 "abort": true, 00:28:12.328 "seek_hole": false, 00:28:12.328 "seek_data": false, 00:28:12.328 "copy": true, 00:28:12.328 "nvme_iov_md": false 00:28:12.328 }, 00:28:12.328 "memory_domains": [ 00:28:12.328 { 00:28:12.328 "dma_device_id": "system", 00:28:12.328 "dma_device_type": 1 00:28:12.328 }, 00:28:12.328 { 00:28:12.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:12.328 "dma_device_type": 2 00:28:12.328 } 00:28:12.328 ], 00:28:12.328 "driver_specific": {} 00:28:12.328 }' 00:28:12.328 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:12.328 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:12.328 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:28:12.328 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:12.328 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:12.328 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:28:12.328 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:12.328 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:12.328 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:28:12.328 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:12.586 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:12.586 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- 
# [[ 0 == 0 ]] 00:28:12.586 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:28:12.586 [2024-07-15 09:54:40.624022] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:12.586 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # local expected_state 00:28:12.586 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:28:12.586 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:28:12.586 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:28:12.586 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:28:12.586 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:28:12.586 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:12.586 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:12.586 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:12.586 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:12.586 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:12.586 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:12.586 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:12.586 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:12.586 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:12.586 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:12.586 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:12.844 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:12.844 "name": "Existed_Raid", 00:28:12.844 "uuid": "3f74d061-4290-11ef-a0af-c98d8ee52a94", 00:28:12.844 "strip_size_kb": 0, 00:28:12.844 "state": "online", 00:28:12.844 "raid_level": "raid1", 00:28:12.844 "superblock": true, 00:28:12.844 "num_base_bdevs": 2, 00:28:12.844 "num_base_bdevs_discovered": 1, 00:28:12.844 "num_base_bdevs_operational": 1, 00:28:12.844 "base_bdevs_list": [ 00:28:12.844 { 00:28:12.844 "name": null, 00:28:12.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:12.844 "is_configured": false, 00:28:12.844 "data_offset": 256, 00:28:12.844 "data_size": 7936 00:28:12.844 }, 00:28:12.844 { 00:28:12.844 "name": "BaseBdev2", 00:28:12.844 "uuid": "3fe4842b-4290-11ef-a0af-c98d8ee52a94", 00:28:12.844 "is_configured": true, 00:28:12.844 "data_offset": 256, 00:28:12.844 "data_size": 
7936 00:28:12.844 } 00:28:12.844 ] 00:28:12.844 }' 00:28:12.844 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:12.844 09:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:13.101 09:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:28:13.101 09:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:13.101 09:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:13.101 09:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:28:13.359 09:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:28:13.359 09:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:13.359 09:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:28:13.618 [2024-07-15 09:54:41.564727] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:13.618 [2024-07-15 09:54:41.564785] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:13.618 [2024-07-15 09:54:41.573548] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:13.618 [2024-07-15 09:54:41.573565] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:13.618 [2024-07-15 09:54:41.573569] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f39f2234a00 name Existed_Raid, state offline 00:28:13.618 09:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:28:13.618 09:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:13.618 09:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:13.618 09:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:28:13.877 09:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:28:13.877 09:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:28:13.877 09:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:28:13.877 09:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@341 -- # killprocess 66568 00:28:13.877 09:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 66568 ']' 00:28:13.877 09:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 66568 00:28:13.877 09:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:28:13.877 09:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 
-- # '[' FreeBSD = Linux ']' 00:28:13.877 09:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps -c -o command 66568 00:28:13.877 09:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # tail -1 00:28:13.877 09:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:28:13.877 killing process with pid 66568 00:28:13.877 09:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:28:13.877 09:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66568' 00:28:13.877 09:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # kill 66568 00:28:13.877 [2024-07-15 09:54:41.794918] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:13.877 09:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # wait 66568 00:28:13.877 [2024-07-15 09:54:41.794962] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:14.136 09:54:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@343 -- # return 0 00:28:14.136 00:28:14.136 real 0m7.600s 00:28:14.136 user 0m12.926s 00:28:14.136 sys 0m1.473s 00:28:14.136 09:54:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:14.136 09:54:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:14.136 ************************************ 00:28:14.136 END TEST raid_state_function_test_sb_md_interleaved 00:28:14.136 ************************************ 00:28:14.136 09:54:42 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:28:14.136 09:54:42 bdev_raid -- bdev/bdev_raid.sh@913 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:28:14.136 09:54:42 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:28:14.136 09:54:42 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:14.136 09:54:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:14.136 ************************************ 00:28:14.136 START TEST raid_superblock_test_md_interleaved 00:28:14.136 ************************************ 00:28:14.136 09:54:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:28:14.136 09:54:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:28:14.136 09:54:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:28:14.136 09:54:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:28:14.136 09:54:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:28:14.136 09:54:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:28:14.136 09:54:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:28:14.136 09:54:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:28:14.136 09:54:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 
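For num_base_bdevs=2 the three arrays declared above end up holding the malloc/passthru/UUID triples used for the rest of this test. A reconstruction of their final contents, assembled from the @415-@422 loop traced below (a sketch for reference, not the script's verbatim source):

base_bdevs_malloc=(malloc1 malloc2)
base_bdevs_pt=(pt1 pt2)
base_bdevs_pt_uuid=(00000000-0000-0000-0000-000000000001 00000000-0000-0000-0000-000000000002)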
00:28:14.136 09:54:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:28:14.136 09:54:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local strip_size 00:28:14.136 09:54:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:28:14.136 09:54:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:28:14.136 09:54:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:28:14.136 09:54:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:28:14.136 09:54:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:28:14.136 09:54:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # raid_pid=66834 00:28:14.136 09:54:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # waitforlisten 66834 /var/tmp/spdk-raid.sock 00:28:14.136 09:54:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:28:14.136 09:54:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 66834 ']' 00:28:14.136 09:54:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:14.136 09:54:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:14.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:14.136 09:54:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:14.136 09:54:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:14.136 09:54:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:14.136 [2024-07-15 09:54:42.125842] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:28:14.136 [2024-07-15 09:54:42.126148] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:28:14.704 EAL: TSC is not safe to use in SMP mode 00:28:14.704 EAL: TSC is not invariant 00:28:14.704 [2024-07-15 09:54:42.560926] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.704 [2024-07-15 09:54:42.675216] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
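bdev_svc here is the minimal SPDK application the test drives: it brings up only the bdev layer and serves JSON-RPC on the Unix socket passed via -r, with -L bdev_raid enabling the *DEBUG* lines seen throughout this trace. A standalone sketch of the same scaffold (the harness itself does the waiting via waitforlisten in common/autotest_common.sh; the polling loop below is an assumed simplification):

SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk-raid.sock
"$SPDK/test/app/bdev_svc/bdev_svc" -r "$SOCK" -L bdev_raid &
raid_pid=$!
# Poll until the RPC socket answers rather than sleeping a fixed interval.
until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
done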
00:28:14.704 [2024-07-15 09:54:42.677737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:14.704 [2024-07-15 09:54:42.678479] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:14.704 [2024-07-15 09:54:42.678490] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:15.272 09:54:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:15.272 09:54:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:28:15.272 09:54:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:28:15.272 09:54:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:28:15.273 09:54:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:28:15.273 09:54:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:28:15.273 09:54:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:28:15.273 09:54:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:15.273 09:54:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:28:15.273 09:54:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:15.273 09:54:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:28:15.273 malloc1 00:28:15.273 09:54:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:15.531 [2024-07-15 09:54:43.485643] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:15.531 [2024-07-15 09:54:43.485710] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:15.531 [2024-07-15 09:54:43.485720] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9a598434780 00:28:15.531 [2024-07-15 09:54:43.485727] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:15.531 [2024-07-15 09:54:43.486585] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:15.531 [2024-07-15 09:54:43.486615] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:15.531 pt1 00:28:15.531 09:54:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:28:15.531 09:54:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:28:15.531 09:54:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:28:15.531 09:54:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:28:15.531 09:54:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:28:15.531 09:54:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:28:15.531 09:54:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:28:15.531 09:54:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:15.531 09:54:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:28:15.790 malloc2 00:28:15.790 09:54:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:15.790 [2024-07-15 09:54:43.877667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:15.790 [2024-07-15 09:54:43.877739] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:15.790 [2024-07-15 09:54:43.877750] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9a598434c80 00:28:15.790 [2024-07-15 09:54:43.877756] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:15.790 [2024-07-15 09:54:43.878363] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:15.790 [2024-07-15 09:54:43.878392] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:15.790 pt2 00:28:16.049 09:54:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:28:16.049 09:54:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:28:16.049 09:54:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:28:16.049 [2024-07-15 09:54:44.085679] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:16.049 [2024-07-15 09:54:44.086245] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:16.049 [2024-07-15 09:54:44.086305] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x9a598434f00 00:28:16.049 [2024-07-15 09:54:44.086312] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:16.049 [2024-07-15 09:54:44.086352] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9a598497e20 00:28:16.049 [2024-07-15 09:54:44.086367] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x9a598434f00 00:28:16.049 [2024-07-15 09:54:44.086371] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x9a598434f00 00:28:16.049 [2024-07-15 09:54:44.086384] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:16.049 09:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:16.049 09:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:16.049 09:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:16.049 09:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:16.049 09:54:44 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:16.049 09:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:16.049 09:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:16.049 09:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:16.049 09:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:16.049 09:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:16.049 09:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:16.049 09:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:16.307 09:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:16.307 "name": "raid_bdev1", 00:28:16.307 "uuid": "4322dc88-4290-11ef-a0af-c98d8ee52a94", 00:28:16.307 "strip_size_kb": 0, 00:28:16.307 "state": "online", 00:28:16.307 "raid_level": "raid1", 00:28:16.307 "superblock": true, 00:28:16.307 "num_base_bdevs": 2, 00:28:16.307 "num_base_bdevs_discovered": 2, 00:28:16.307 "num_base_bdevs_operational": 2, 00:28:16.307 "base_bdevs_list": [ 00:28:16.307 { 00:28:16.307 "name": "pt1", 00:28:16.307 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:16.307 "is_configured": true, 00:28:16.307 "data_offset": 256, 00:28:16.307 "data_size": 7936 00:28:16.307 }, 00:28:16.307 { 00:28:16.307 "name": "pt2", 00:28:16.307 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:16.307 "is_configured": true, 00:28:16.307 "data_offset": 256, 00:28:16.307 "data_size": 7936 00:28:16.307 } 00:28:16.307 ] 00:28:16.307 }' 00:28:16.307 09:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:16.307 09:54:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:16.566 09:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:28:16.566 09:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:28:16.566 09:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:28:16.566 09:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:16.566 09:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:16.566 09:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:28:16.566 09:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:16.566 09:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:28:16.826 [2024-07-15 09:54:44.809719] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:16.826 09:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:16.826 "name": "raid_bdev1", 
00:28:16.826 "aliases": [ 00:28:16.826 "4322dc88-4290-11ef-a0af-c98d8ee52a94" 00:28:16.826 ], 00:28:16.826 "product_name": "Raid Volume", 00:28:16.826 "block_size": 4128, 00:28:16.826 "num_blocks": 7936, 00:28:16.826 "uuid": "4322dc88-4290-11ef-a0af-c98d8ee52a94", 00:28:16.826 "md_size": 32, 00:28:16.826 "md_interleave": true, 00:28:16.826 "dif_type": 0, 00:28:16.826 "assigned_rate_limits": { 00:28:16.826 "rw_ios_per_sec": 0, 00:28:16.826 "rw_mbytes_per_sec": 0, 00:28:16.826 "r_mbytes_per_sec": 0, 00:28:16.826 "w_mbytes_per_sec": 0 00:28:16.826 }, 00:28:16.826 "claimed": false, 00:28:16.826 "zoned": false, 00:28:16.826 "supported_io_types": { 00:28:16.826 "read": true, 00:28:16.826 "write": true, 00:28:16.826 "unmap": false, 00:28:16.826 "flush": false, 00:28:16.826 "reset": true, 00:28:16.826 "nvme_admin": false, 00:28:16.826 "nvme_io": false, 00:28:16.826 "nvme_io_md": false, 00:28:16.827 "write_zeroes": true, 00:28:16.827 "zcopy": false, 00:28:16.827 "get_zone_info": false, 00:28:16.827 "zone_management": false, 00:28:16.827 "zone_append": false, 00:28:16.827 "compare": false, 00:28:16.827 "compare_and_write": false, 00:28:16.827 "abort": false, 00:28:16.827 "seek_hole": false, 00:28:16.827 "seek_data": false, 00:28:16.827 "copy": false, 00:28:16.827 "nvme_iov_md": false 00:28:16.827 }, 00:28:16.827 "memory_domains": [ 00:28:16.827 { 00:28:16.827 "dma_device_id": "system", 00:28:16.827 "dma_device_type": 1 00:28:16.827 }, 00:28:16.827 { 00:28:16.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:16.827 "dma_device_type": 2 00:28:16.827 }, 00:28:16.827 { 00:28:16.827 "dma_device_id": "system", 00:28:16.827 "dma_device_type": 1 00:28:16.827 }, 00:28:16.827 { 00:28:16.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:16.827 "dma_device_type": 2 00:28:16.827 } 00:28:16.827 ], 00:28:16.827 "driver_specific": { 00:28:16.827 "raid": { 00:28:16.827 "uuid": "4322dc88-4290-11ef-a0af-c98d8ee52a94", 00:28:16.827 "strip_size_kb": 0, 00:28:16.827 "state": "online", 00:28:16.827 "raid_level": "raid1", 00:28:16.827 "superblock": true, 00:28:16.827 "num_base_bdevs": 2, 00:28:16.827 "num_base_bdevs_discovered": 2, 00:28:16.827 "num_base_bdevs_operational": 2, 00:28:16.827 "base_bdevs_list": [ 00:28:16.827 { 00:28:16.827 "name": "pt1", 00:28:16.827 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:16.827 "is_configured": true, 00:28:16.827 "data_offset": 256, 00:28:16.827 "data_size": 7936 00:28:16.827 }, 00:28:16.827 { 00:28:16.827 "name": "pt2", 00:28:16.827 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:16.827 "is_configured": true, 00:28:16.827 "data_offset": 256, 00:28:16.827 "data_size": 7936 00:28:16.827 } 00:28:16.827 ] 00:28:16.827 } 00:28:16.827 } 00:28:16.827 }' 00:28:16.827 09:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:16.827 09:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:28:16.827 pt2' 00:28:16.827 09:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:16.827 09:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:28:16.827 09:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:17.085 09:54:45 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:17.085 "name": "pt1", 00:28:17.085 "aliases": [ 00:28:17.085 "00000000-0000-0000-0000-000000000001" 00:28:17.085 ], 00:28:17.085 "product_name": "passthru", 00:28:17.085 "block_size": 4128, 00:28:17.085 "num_blocks": 8192, 00:28:17.085 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:17.085 "md_size": 32, 00:28:17.085 "md_interleave": true, 00:28:17.085 "dif_type": 0, 00:28:17.085 "assigned_rate_limits": { 00:28:17.085 "rw_ios_per_sec": 0, 00:28:17.085 "rw_mbytes_per_sec": 0, 00:28:17.085 "r_mbytes_per_sec": 0, 00:28:17.085 "w_mbytes_per_sec": 0 00:28:17.085 }, 00:28:17.085 "claimed": true, 00:28:17.085 "claim_type": "exclusive_write", 00:28:17.085 "zoned": false, 00:28:17.085 "supported_io_types": { 00:28:17.085 "read": true, 00:28:17.085 "write": true, 00:28:17.085 "unmap": true, 00:28:17.085 "flush": true, 00:28:17.085 "reset": true, 00:28:17.085 "nvme_admin": false, 00:28:17.085 "nvme_io": false, 00:28:17.085 "nvme_io_md": false, 00:28:17.085 "write_zeroes": true, 00:28:17.085 "zcopy": true, 00:28:17.085 "get_zone_info": false, 00:28:17.085 "zone_management": false, 00:28:17.085 "zone_append": false, 00:28:17.085 "compare": false, 00:28:17.085 "compare_and_write": false, 00:28:17.085 "abort": true, 00:28:17.085 "seek_hole": false, 00:28:17.085 "seek_data": false, 00:28:17.085 "copy": true, 00:28:17.085 "nvme_iov_md": false 00:28:17.085 }, 00:28:17.085 "memory_domains": [ 00:28:17.085 { 00:28:17.085 "dma_device_id": "system", 00:28:17.085 "dma_device_type": 1 00:28:17.085 }, 00:28:17.085 { 00:28:17.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:17.085 "dma_device_type": 2 00:28:17.085 } 00:28:17.085 ], 00:28:17.085 "driver_specific": { 00:28:17.086 "passthru": { 00:28:17.086 "name": "pt1", 00:28:17.086 "base_bdev_name": "malloc1" 00:28:17.086 } 00:28:17.086 } 00:28:17.086 }' 00:28:17.086 09:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:17.086 09:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:17.086 09:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:28:17.086 09:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:17.086 09:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:17.086 09:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:28:17.086 09:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:17.086 09:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:17.086 09:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:28:17.086 09:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:17.086 09:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:17.086 09:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:28:17.086 09:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:17.086 09:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 
00:28:17.086 09:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:17.344 09:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:17.344 "name": "pt2", 00:28:17.344 "aliases": [ 00:28:17.344 "00000000-0000-0000-0000-000000000002" 00:28:17.344 ], 00:28:17.344 "product_name": "passthru", 00:28:17.344 "block_size": 4128, 00:28:17.344 "num_blocks": 8192, 00:28:17.344 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:17.344 "md_size": 32, 00:28:17.344 "md_interleave": true, 00:28:17.344 "dif_type": 0, 00:28:17.344 "assigned_rate_limits": { 00:28:17.344 "rw_ios_per_sec": 0, 00:28:17.344 "rw_mbytes_per_sec": 0, 00:28:17.344 "r_mbytes_per_sec": 0, 00:28:17.344 "w_mbytes_per_sec": 0 00:28:17.344 }, 00:28:17.344 "claimed": true, 00:28:17.344 "claim_type": "exclusive_write", 00:28:17.344 "zoned": false, 00:28:17.344 "supported_io_types": { 00:28:17.344 "read": true, 00:28:17.344 "write": true, 00:28:17.344 "unmap": true, 00:28:17.344 "flush": true, 00:28:17.344 "reset": true, 00:28:17.344 "nvme_admin": false, 00:28:17.344 "nvme_io": false, 00:28:17.344 "nvme_io_md": false, 00:28:17.344 "write_zeroes": true, 00:28:17.344 "zcopy": true, 00:28:17.344 "get_zone_info": false, 00:28:17.344 "zone_management": false, 00:28:17.344 "zone_append": false, 00:28:17.344 "compare": false, 00:28:17.344 "compare_and_write": false, 00:28:17.344 "abort": true, 00:28:17.344 "seek_hole": false, 00:28:17.344 "seek_data": false, 00:28:17.344 "copy": true, 00:28:17.344 "nvme_iov_md": false 00:28:17.344 }, 00:28:17.344 "memory_domains": [ 00:28:17.344 { 00:28:17.344 "dma_device_id": "system", 00:28:17.344 "dma_device_type": 1 00:28:17.344 }, 00:28:17.344 { 00:28:17.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:17.344 "dma_device_type": 2 00:28:17.344 } 00:28:17.344 ], 00:28:17.344 "driver_specific": { 00:28:17.344 "passthru": { 00:28:17.344 "name": "pt2", 00:28:17.344 "base_bdev_name": "malloc2" 00:28:17.344 } 00:28:17.344 } 00:28:17.344 }' 00:28:17.344 09:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:17.344 09:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:17.344 09:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:28:17.344 09:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:17.344 09:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:17.344 09:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:28:17.345 09:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:17.345 09:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:17.345 09:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:28:17.345 09:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:17.345 09:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:17.345 09:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:28:17.345 09:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:17.345 09:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:28:17.705 [2024-07-15 09:54:45.621755] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:17.705 09:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=4322dc88-4290-11ef-a0af-c98d8ee52a94 00:28:17.705 09:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # '[' -z 4322dc88-4290-11ef-a0af-c98d8ee52a94 ']' 00:28:17.705 09:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:17.961 [2024-07-15 09:54:45.821741] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:17.961 [2024-07-15 09:54:45.821766] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:17.961 [2024-07-15 09:54:45.821782] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:17.961 [2024-07-15 09:54:45.821794] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:17.961 [2024-07-15 09:54:45.821797] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9a598434f00 name raid_bdev1, state offline 00:28:17.961 09:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:17.961 09:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:28:17.961 09:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:28:17.961 09:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:28:17.961 09:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:28:17.961 09:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:28:18.217 09:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:28:18.217 09:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:28:18.473 09:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:28:18.473 09:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:28:18.730 09:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:28:18.730 09:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:28:18.730 09:54:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:28:18.730 09:54:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # 
valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:28:18.730 09:54:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:18.730 09:54:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:18.730 09:54:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:18.730 09:54:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:18.730 09:54:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:18.730 09:54:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:18.730 09:54:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:18.730 09:54:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:28:18.730 09:54:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:28:18.730 [2024-07-15 09:54:46.765798] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:28:18.730 [2024-07-15 09:54:46.766473] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:28:18.730 [2024-07-15 09:54:46.766498] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:28:18.730 [2024-07-15 09:54:46.766537] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:28:18.730 [2024-07-15 09:54:46.766546] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:18.730 [2024-07-15 09:54:46.766550] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9a598434c80 name raid_bdev1, state configuring 00:28:18.730 request: 00:28:18.730 { 00:28:18.730 "name": "raid_bdev1", 00:28:18.730 "raid_level": "raid1", 00:28:18.730 "base_bdevs": [ 00:28:18.730 "malloc1", 00:28:18.730 "malloc2" 00:28:18.730 ], 00:28:18.730 "superblock": false, 00:28:18.730 "method": "bdev_raid_create", 00:28:18.730 "req_id": 1 00:28:18.730 } 00:28:18.730 Got JSON-RPC error response 00:28:18.730 response: 00:28:18.730 { 00:28:18.730 "code": -17, 00:28:18.730 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:28:18.730 } 00:28:18.730 09:54:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:28:18.730 09:54:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:18.730 09:54:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:18.730 09:54:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:18.730 09:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:18.730 09:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:28:18.988 09:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:28:18.988 09:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:28:18.988 09:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:19.245 [2024-07-15 09:54:47.165814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:19.245 [2024-07-15 09:54:47.165877] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:19.245 [2024-07-15 09:54:47.165886] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9a598434780 00:28:19.245 [2024-07-15 09:54:47.165892] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:19.245 [2024-07-15 09:54:47.166546] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:19.245 [2024-07-15 09:54:47.166572] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:19.245 [2024-07-15 09:54:47.166586] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:28:19.245 [2024-07-15 09:54:47.166597] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:19.245 pt1 00:28:19.245 09:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:28:19.245 09:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:19.245 09:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:19.245 09:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:19.245 09:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:19.245 09:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:19.245 09:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:19.245 09:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:19.245 09:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:19.245 09:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:19.245 09:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:19.245 09:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:19.502 09:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:19.502 "name": "raid_bdev1", 00:28:19.502 "uuid": "4322dc88-4290-11ef-a0af-c98d8ee52a94", 00:28:19.502 "strip_size_kb": 0, 00:28:19.502 "state": "configuring", 00:28:19.502 
"raid_level": "raid1", 00:28:19.502 "superblock": true, 00:28:19.502 "num_base_bdevs": 2, 00:28:19.502 "num_base_bdevs_discovered": 1, 00:28:19.502 "num_base_bdevs_operational": 2, 00:28:19.502 "base_bdevs_list": [ 00:28:19.502 { 00:28:19.502 "name": "pt1", 00:28:19.502 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:19.502 "is_configured": true, 00:28:19.502 "data_offset": 256, 00:28:19.502 "data_size": 7936 00:28:19.502 }, 00:28:19.503 { 00:28:19.503 "name": null, 00:28:19.503 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:19.503 "is_configured": false, 00:28:19.503 "data_offset": 256, 00:28:19.503 "data_size": 7936 00:28:19.503 } 00:28:19.503 ] 00:28:19.503 }' 00:28:19.503 09:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:19.503 09:54:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:19.760 09:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:28:19.760 09:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:28:19.760 09:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:28:19.760 09:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:19.760 [2024-07-15 09:54:47.861867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:19.760 [2024-07-15 09:54:47.861917] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:19.760 [2024-07-15 09:54:47.861926] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9a598434f00 00:28:19.760 [2024-07-15 09:54:47.861933] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:19.760 [2024-07-15 09:54:47.861977] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:19.760 [2024-07-15 09:54:47.861984] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:19.760 [2024-07-15 09:54:47.861995] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:28:19.760 [2024-07-15 09:54:47.862002] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:19.760 [2024-07-15 09:54:47.862019] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x9a598435180 00:28:19.760 [2024-07-15 09:54:47.862024] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:19.760 [2024-07-15 09:54:47.862039] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9a598497e20 00:28:19.760 [2024-07-15 09:54:47.862051] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x9a598435180 00:28:19.760 [2024-07-15 09:54:47.862054] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x9a598435180 00:28:19.760 [2024-07-15 09:54:47.862064] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:20.018 pt2 00:28:20.018 09:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:28:20.018 09:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:28:20.018 09:54:47 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:20.018 09:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:20.018 09:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:20.018 09:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:20.018 09:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:20.018 09:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:20.018 09:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:20.018 09:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:20.018 09:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:20.018 09:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:20.018 09:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:20.018 09:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:20.018 09:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:20.018 "name": "raid_bdev1", 00:28:20.018 "uuid": "4322dc88-4290-11ef-a0af-c98d8ee52a94", 00:28:20.018 "strip_size_kb": 0, 00:28:20.018 "state": "online", 00:28:20.018 "raid_level": "raid1", 00:28:20.018 "superblock": true, 00:28:20.018 "num_base_bdevs": 2, 00:28:20.018 "num_base_bdevs_discovered": 2, 00:28:20.018 "num_base_bdevs_operational": 2, 00:28:20.018 "base_bdevs_list": [ 00:28:20.018 { 00:28:20.018 "name": "pt1", 00:28:20.018 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:20.018 "is_configured": true, 00:28:20.018 "data_offset": 256, 00:28:20.018 "data_size": 7936 00:28:20.018 }, 00:28:20.018 { 00:28:20.018 "name": "pt2", 00:28:20.018 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:20.018 "is_configured": true, 00:28:20.018 "data_offset": 256, 00:28:20.018 "data_size": 7936 00:28:20.018 } 00:28:20.018 ] 00:28:20.018 }' 00:28:20.018 09:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:20.018 09:54:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:20.277 09:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:28:20.277 09:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:28:20.277 09:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:28:20.277 09:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:20.277 09:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:20.277 09:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:28:20.277 09:54:48 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:20.277 09:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:28:20.536 [2024-07-15 09:54:48.541927] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:20.536 09:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:20.536 "name": "raid_bdev1", 00:28:20.536 "aliases": [ 00:28:20.536 "4322dc88-4290-11ef-a0af-c98d8ee52a94" 00:28:20.536 ], 00:28:20.536 "product_name": "Raid Volume", 00:28:20.536 "block_size": 4128, 00:28:20.536 "num_blocks": 7936, 00:28:20.536 "uuid": "4322dc88-4290-11ef-a0af-c98d8ee52a94", 00:28:20.536 "md_size": 32, 00:28:20.536 "md_interleave": true, 00:28:20.536 "dif_type": 0, 00:28:20.536 "assigned_rate_limits": { 00:28:20.536 "rw_ios_per_sec": 0, 00:28:20.536 "rw_mbytes_per_sec": 0, 00:28:20.536 "r_mbytes_per_sec": 0, 00:28:20.536 "w_mbytes_per_sec": 0 00:28:20.536 }, 00:28:20.536 "claimed": false, 00:28:20.536 "zoned": false, 00:28:20.536 "supported_io_types": { 00:28:20.536 "read": true, 00:28:20.536 "write": true, 00:28:20.536 "unmap": false, 00:28:20.536 "flush": false, 00:28:20.536 "reset": true, 00:28:20.536 "nvme_admin": false, 00:28:20.536 "nvme_io": false, 00:28:20.536 "nvme_io_md": false, 00:28:20.536 "write_zeroes": true, 00:28:20.536 "zcopy": false, 00:28:20.536 "get_zone_info": false, 00:28:20.536 "zone_management": false, 00:28:20.536 "zone_append": false, 00:28:20.536 "compare": false, 00:28:20.536 "compare_and_write": false, 00:28:20.536 "abort": false, 00:28:20.536 "seek_hole": false, 00:28:20.536 "seek_data": false, 00:28:20.536 "copy": false, 00:28:20.536 "nvme_iov_md": false 00:28:20.536 }, 00:28:20.536 "memory_domains": [ 00:28:20.536 { 00:28:20.536 "dma_device_id": "system", 00:28:20.536 "dma_device_type": 1 00:28:20.536 }, 00:28:20.536 { 00:28:20.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:20.536 "dma_device_type": 2 00:28:20.536 }, 00:28:20.536 { 00:28:20.536 "dma_device_id": "system", 00:28:20.536 "dma_device_type": 1 00:28:20.536 }, 00:28:20.536 { 00:28:20.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:20.536 "dma_device_type": 2 00:28:20.536 } 00:28:20.536 ], 00:28:20.536 "driver_specific": { 00:28:20.536 "raid": { 00:28:20.536 "uuid": "4322dc88-4290-11ef-a0af-c98d8ee52a94", 00:28:20.536 "strip_size_kb": 0, 00:28:20.536 "state": "online", 00:28:20.536 "raid_level": "raid1", 00:28:20.536 "superblock": true, 00:28:20.536 "num_base_bdevs": 2, 00:28:20.536 "num_base_bdevs_discovered": 2, 00:28:20.536 "num_base_bdevs_operational": 2, 00:28:20.536 "base_bdevs_list": [ 00:28:20.536 { 00:28:20.536 "name": "pt1", 00:28:20.536 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:20.536 "is_configured": true, 00:28:20.536 "data_offset": 256, 00:28:20.536 "data_size": 7936 00:28:20.536 }, 00:28:20.536 { 00:28:20.536 "name": "pt2", 00:28:20.536 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:20.536 "is_configured": true, 00:28:20.536 "data_offset": 256, 00:28:20.536 "data_size": 7936 00:28:20.536 } 00:28:20.536 ] 00:28:20.536 } 00:28:20.536 } 00:28:20.536 }' 00:28:20.536 09:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:20.536 09:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- 
# base_bdev_names='pt1 00:28:20.536 pt2' 00:28:20.536 09:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:20.536 09:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:28:20.536 09:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:20.795 09:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:20.795 "name": "pt1", 00:28:20.795 "aliases": [ 00:28:20.795 "00000000-0000-0000-0000-000000000001" 00:28:20.795 ], 00:28:20.795 "product_name": "passthru", 00:28:20.795 "block_size": 4128, 00:28:20.795 "num_blocks": 8192, 00:28:20.795 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:20.795 "md_size": 32, 00:28:20.795 "md_interleave": true, 00:28:20.795 "dif_type": 0, 00:28:20.795 "assigned_rate_limits": { 00:28:20.795 "rw_ios_per_sec": 0, 00:28:20.795 "rw_mbytes_per_sec": 0, 00:28:20.795 "r_mbytes_per_sec": 0, 00:28:20.795 "w_mbytes_per_sec": 0 00:28:20.795 }, 00:28:20.795 "claimed": true, 00:28:20.795 "claim_type": "exclusive_write", 00:28:20.795 "zoned": false, 00:28:20.795 "supported_io_types": { 00:28:20.795 "read": true, 00:28:20.795 "write": true, 00:28:20.795 "unmap": true, 00:28:20.795 "flush": true, 00:28:20.795 "reset": true, 00:28:20.795 "nvme_admin": false, 00:28:20.795 "nvme_io": false, 00:28:20.795 "nvme_io_md": false, 00:28:20.795 "write_zeroes": true, 00:28:20.795 "zcopy": true, 00:28:20.795 "get_zone_info": false, 00:28:20.795 "zone_management": false, 00:28:20.795 "zone_append": false, 00:28:20.795 "compare": false, 00:28:20.795 "compare_and_write": false, 00:28:20.795 "abort": true, 00:28:20.795 "seek_hole": false, 00:28:20.795 "seek_data": false, 00:28:20.795 "copy": true, 00:28:20.795 "nvme_iov_md": false 00:28:20.795 }, 00:28:20.795 "memory_domains": [ 00:28:20.795 { 00:28:20.795 "dma_device_id": "system", 00:28:20.795 "dma_device_type": 1 00:28:20.795 }, 00:28:20.795 { 00:28:20.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:20.795 "dma_device_type": 2 00:28:20.795 } 00:28:20.795 ], 00:28:20.795 "driver_specific": { 00:28:20.795 "passthru": { 00:28:20.795 "name": "pt1", 00:28:20.795 "base_bdev_name": "malloc1" 00:28:20.795 } 00:28:20.795 } 00:28:20.795 }' 00:28:20.795 09:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:20.795 09:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:20.795 09:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:28:20.795 09:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:20.795 09:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:20.795 09:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:28:20.795 09:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:20.795 09:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:20.795 09:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:28:20.795 09:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
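These per-bdev property checks encode the interleaved-metadata invariants: block_size reads 4128 because each 4096-byte data block carries its 32 bytes of metadata inline (md_interleave true, md_size 32, dif_type 0), matching the bdev_malloc_create 32 4096 -m 32 -i calls earlier in this test. The same assertion condensed into a single hypothetical jq predicate, rather than the script's four separate filters (jq -e exits non-zero when the expression is false):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 |
        jq -e '.[0] | .block_size == 4128 and .md_size == 32 and .md_interleave == true and .dif_type == 0'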
00:28:20.795 09:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:20.795 09:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:28:20.795 09:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:20.795 09:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:28:20.795 09:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:21.054 09:54:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:21.054 "name": "pt2", 00:28:21.054 "aliases": [ 00:28:21.054 "00000000-0000-0000-0000-000000000002" 00:28:21.054 ], 00:28:21.054 "product_name": "passthru", 00:28:21.054 "block_size": 4128, 00:28:21.054 "num_blocks": 8192, 00:28:21.054 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:21.054 "md_size": 32, 00:28:21.054 "md_interleave": true, 00:28:21.054 "dif_type": 0, 00:28:21.054 "assigned_rate_limits": { 00:28:21.054 "rw_ios_per_sec": 0, 00:28:21.054 "rw_mbytes_per_sec": 0, 00:28:21.054 "r_mbytes_per_sec": 0, 00:28:21.054 "w_mbytes_per_sec": 0 00:28:21.054 }, 00:28:21.054 "claimed": true, 00:28:21.054 "claim_type": "exclusive_write", 00:28:21.054 "zoned": false, 00:28:21.054 "supported_io_types": { 00:28:21.054 "read": true, 00:28:21.054 "write": true, 00:28:21.054 "unmap": true, 00:28:21.054 "flush": true, 00:28:21.054 "reset": true, 00:28:21.054 "nvme_admin": false, 00:28:21.054 "nvme_io": false, 00:28:21.054 "nvme_io_md": false, 00:28:21.054 "write_zeroes": true, 00:28:21.054 "zcopy": true, 00:28:21.054 "get_zone_info": false, 00:28:21.054 "zone_management": false, 00:28:21.054 "zone_append": false, 00:28:21.054 "compare": false, 00:28:21.054 "compare_and_write": false, 00:28:21.054 "abort": true, 00:28:21.054 "seek_hole": false, 00:28:21.054 "seek_data": false, 00:28:21.054 "copy": true, 00:28:21.054 "nvme_iov_md": false 00:28:21.054 }, 00:28:21.054 "memory_domains": [ 00:28:21.054 { 00:28:21.054 "dma_device_id": "system", 00:28:21.054 "dma_device_type": 1 00:28:21.054 }, 00:28:21.054 { 00:28:21.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:21.054 "dma_device_type": 2 00:28:21.054 } 00:28:21.054 ], 00:28:21.054 "driver_specific": { 00:28:21.054 "passthru": { 00:28:21.054 "name": "pt2", 00:28:21.054 "base_bdev_name": "malloc2" 00:28:21.054 } 00:28:21.054 } 00:28:21.054 }' 00:28:21.054 09:54:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:21.054 09:54:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:21.054 09:54:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:28:21.054 09:54:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:21.054 09:54:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:21.054 09:54:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:28:21.054 09:54:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:21.054 09:54:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:21.054 09:54:49 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:28:21.054 09:54:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:21.054 09:54:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:21.054 09:54:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:28:21.312 09:54:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:21.312 09:54:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:28:21.312 [2024-07-15 09:54:49.333959] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:21.312 09:54:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # '[' 4322dc88-4290-11ef-a0af-c98d8ee52a94 '!=' 4322dc88-4290-11ef-a0af-c98d8ee52a94 ']' 00:28:21.312 09:54:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:28:21.312 09:54:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:28:21.312 09:54:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:28:21.312 09:54:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:28:21.570 [2024-07-15 09:54:49.513945] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:28:21.570 09:54:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:21.570 09:54:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:21.570 09:54:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:21.570 09:54:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:21.570 09:54:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:21.570 09:54:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:21.570 09:54:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:21.570 09:54:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:21.570 09:54:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:21.570 09:54:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:21.570 09:54:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:21.570 09:54:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:22.003 09:54:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:22.003 "name": "raid_bdev1", 00:28:22.003 "uuid": "4322dc88-4290-11ef-a0af-c98d8ee52a94", 00:28:22.003 "strip_size_kb": 0, 00:28:22.003 "state": "online", 
00:28:22.003 "raid_level": "raid1", 00:28:22.003 "superblock": true, 00:28:22.003 "num_base_bdevs": 2, 00:28:22.003 "num_base_bdevs_discovered": 1, 00:28:22.003 "num_base_bdevs_operational": 1, 00:28:22.003 "base_bdevs_list": [ 00:28:22.003 { 00:28:22.003 "name": null, 00:28:22.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:22.003 "is_configured": false, 00:28:22.003 "data_offset": 256, 00:28:22.003 "data_size": 7936 00:28:22.003 }, 00:28:22.003 { 00:28:22.003 "name": "pt2", 00:28:22.003 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:22.003 "is_configured": true, 00:28:22.003 "data_offset": 256, 00:28:22.003 "data_size": 7936 00:28:22.003 } 00:28:22.003 ] 00:28:22.003 }' 00:28:22.003 09:54:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:22.003 09:54:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:22.003 09:54:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:22.263 [2024-07-15 09:54:50.246011] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:22.263 [2024-07-15 09:54:50.246035] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:22.263 [2024-07-15 09:54:50.246048] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:22.263 [2024-07-15 09:54:50.246056] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:22.263 [2024-07-15 09:54:50.246060] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9a598435180 name raid_bdev1, state offline 00:28:22.263 09:54:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:22.263 09:54:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:28:22.521 09:54:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:28:22.521 09:54:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:28:22.521 09:54:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:28:22.521 09:54:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:28:22.521 09:54:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:28:22.781 09:54:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:28:22.781 09:54:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:28:22.781 09:54:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:28:22.781 09:54:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:28:22.781 09:54:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@518 -- # i=1 00:28:22.781 09:54:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:28:22.781 [2024-07-15 09:54:50.862073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:22.781 [2024-07-15 09:54:50.862126] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:22.781 [2024-07-15 09:54:50.862135] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9a598434f00 00:28:22.781 [2024-07-15 09:54:50.862142] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:22.781 [2024-07-15 09:54:50.862827] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:22.781 [2024-07-15 09:54:50.862856] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:22.781 [2024-07-15 09:54:50.862871] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:28:22.781 [2024-07-15 09:54:50.862882] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:22.781 [2024-07-15 09:54:50.862899] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x9a598435180 00:28:22.781 [2024-07-15 09:54:50.862902] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:22.782 [2024-07-15 09:54:50.862922] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9a598497e20 00:28:22.782 [2024-07-15 09:54:50.862934] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x9a598435180 00:28:22.782 [2024-07-15 09:54:50.862937] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x9a598435180 00:28:22.782 [2024-07-15 09:54:50.862945] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:22.782 pt2 00:28:22.782 09:54:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:22.782 09:54:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:22.782 09:54:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:22.782 09:54:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:22.782 09:54:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:22.782 09:54:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:22.782 09:54:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:22.782 09:54:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:22.782 09:54:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:22.782 09:54:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:22.782 09:54:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:22.782 09:54:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:23.040 09:54:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:23.040 "name": "raid_bdev1", 00:28:23.040 "uuid": 
"4322dc88-4290-11ef-a0af-c98d8ee52a94", 00:28:23.040 "strip_size_kb": 0, 00:28:23.040 "state": "online", 00:28:23.040 "raid_level": "raid1", 00:28:23.040 "superblock": true, 00:28:23.040 "num_base_bdevs": 2, 00:28:23.040 "num_base_bdevs_discovered": 1, 00:28:23.040 "num_base_bdevs_operational": 1, 00:28:23.040 "base_bdevs_list": [ 00:28:23.040 { 00:28:23.040 "name": null, 00:28:23.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:23.040 "is_configured": false, 00:28:23.040 "data_offset": 256, 00:28:23.040 "data_size": 7936 00:28:23.040 }, 00:28:23.040 { 00:28:23.040 "name": "pt2", 00:28:23.040 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:23.040 "is_configured": true, 00:28:23.040 "data_offset": 256, 00:28:23.040 "data_size": 7936 00:28:23.040 } 00:28:23.040 ] 00:28:23.040 }' 00:28:23.040 09:54:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:23.040 09:54:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:23.298 09:54:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:23.556 [2024-07-15 09:54:51.526100] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:23.556 [2024-07-15 09:54:51.526133] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:23.556 [2024-07-15 09:54:51.526153] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:23.556 [2024-07-15 09:54:51.526163] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:23.556 [2024-07-15 09:54:51.526167] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9a598435180 name raid_bdev1, state offline 00:28:23.556 09:54:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:23.556 09:54:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:28:23.813 09:54:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:28:23.813 09:54:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:28:23.813 09:54:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:28:23.813 09:54:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:24.072 [2024-07-15 09:54:51.918133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:24.072 [2024-07-15 09:54:51.918188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:24.072 [2024-07-15 09:54:51.918196] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9a598434c80 00:28:24.072 [2024-07-15 09:54:51.918203] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:24.072 [2024-07-15 09:54:51.918880] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:24.072 [2024-07-15 09:54:51.918907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:24.072 [2024-07-15 09:54:51.918924] 
bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:28:24.072 [2024-07-15 09:54:51.918935] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:24.072 [2024-07-15 09:54:51.918956] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:28:24.072 [2024-07-15 09:54:51.918960] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:24.072 [2024-07-15 09:54:51.918965] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9a598434780 name raid_bdev1, state configuring 00:28:24.072 [2024-07-15 09:54:51.918976] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:24.072 [2024-07-15 09:54:51.918989] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x9a598434780 00:28:24.072 [2024-07-15 09:54:51.918992] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:24.072 [2024-07-15 09:54:51.919011] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9a598497e20 00:28:24.072 [2024-07-15 09:54:51.919022] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x9a598434780 00:28:24.072 [2024-07-15 09:54:51.919025] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x9a598434780 00:28:24.072 [2024-07-15 09:54:51.919033] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:24.072 pt1 00:28:24.072 09:54:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:28:24.072 09:54:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:24.073 09:54:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:24.073 09:54:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:24.073 09:54:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:24.073 09:54:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:24.073 09:54:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:24.073 09:54:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:24.073 09:54:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:24.073 09:54:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:24.073 09:54:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:24.073 09:54:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:24.073 09:54:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:24.073 09:54:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:24.073 "name": "raid_bdev1", 00:28:24.073 "uuid": "4322dc88-4290-11ef-a0af-c98d8ee52a94", 00:28:24.073 "strip_size_kb": 0, 00:28:24.073 "state": "online", 00:28:24.073 
"raid_level": "raid1", 00:28:24.073 "superblock": true, 00:28:24.073 "num_base_bdevs": 2, 00:28:24.073 "num_base_bdevs_discovered": 1, 00:28:24.073 "num_base_bdevs_operational": 1, 00:28:24.073 "base_bdevs_list": [ 00:28:24.073 { 00:28:24.073 "name": null, 00:28:24.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:24.073 "is_configured": false, 00:28:24.073 "data_offset": 256, 00:28:24.073 "data_size": 7936 00:28:24.073 }, 00:28:24.073 { 00:28:24.073 "name": "pt2", 00:28:24.073 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:24.073 "is_configured": true, 00:28:24.073 "data_offset": 256, 00:28:24.073 "data_size": 7936 00:28:24.073 } 00:28:24.073 ] 00:28:24.073 }' 00:28:24.073 09:54:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:24.073 09:54:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:24.331 09:54:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:28:24.331 09:54:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:28:24.588 09:54:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:28:24.588 09:54:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:28:24.588 09:54:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:24.846 [2024-07-15 09:54:52.846239] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:24.846 09:54:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # '[' 4322dc88-4290-11ef-a0af-c98d8ee52a94 '!=' 4322dc88-4290-11ef-a0af-c98d8ee52a94 ']' 00:28:24.846 09:54:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@562 -- # killprocess 66834 00:28:24.846 09:54:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 66834 ']' 00:28:24.846 09:54:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 66834 00:28:24.847 09:54:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:28:24.847 09:54:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:28:24.847 09:54:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # tail -1 00:28:24.847 09:54:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps -c -o command 66834 00:28:24.847 09:54:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:28:24.847 09:54:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:28:24.847 killing process with pid 66834 00:28:24.847 09:54:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66834' 00:28:24.847 09:54:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@967 -- # kill 66834 00:28:24.847 [2024-07-15 09:54:52.876571] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:24.847 [2024-07-15 09:54:52.876590] 
bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:24.847 [2024-07-15 09:54:52.876610] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:24.847 [2024-07-15 09:54:52.876614] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9a598434780 name raid_bdev1, state offline 00:28:24.847 09:54:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # wait 66834 00:28:24.847 [2024-07-15 09:54:52.894756] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:25.105 09:54:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@564 -- # return 0 00:28:25.105 00:28:25.105 real 0m11.042s 00:28:25.105 user 0m19.263s 00:28:25.105 sys 0m2.048s 00:28:25.105 09:54:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:25.105 09:54:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:25.105 ************************************ 00:28:25.105 END TEST raid_superblock_test_md_interleaved 00:28:25.105 ************************************ 00:28:25.105 09:54:53 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:28:25.105 09:54:53 bdev_raid -- bdev/bdev_raid.sh@914 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:28:25.105 09:54:53 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:28:25.105 09:54:53 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:25.105 09:54:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:25.363 ************************************ 00:28:25.363 START TEST raid_rebuild_test_sb_md_interleaved 00:28:25.363 ************************************ 00:28:25.363 09:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false false 00:28:25.363 09:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:28:25.363 09:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:28:25.363 09:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:28:25.363 09:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:28:25.363 09:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local verify=false 00:28:25.363 09:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:28:25.363 09:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:25.363 09:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:28:25.363 09:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:25.363 09:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:25.363 09:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:28:25.363 09:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:25.363 09:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:25.363 09:54:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:28:25.363 09:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:28:25.363 09:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:28:25.363 09:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local strip_size 00:28:25.363 09:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local create_arg 00:28:25.363 09:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:28:25.363 09:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local data_offset 00:28:25.363 09:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:28:25.363 09:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:28:25.363 09:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:28:25.363 09:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:28:25.363 09:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # raid_pid=67217 00:28:25.363 09:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # waitforlisten 67217 /var/tmp/spdk-raid.sock 00:28:25.363 09:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:25.363 09:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 67217 ']' 00:28:25.363 09:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:25.363 09:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:25.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:25.363 09:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:25.363 09:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:25.363 09:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:25.363 [2024-07-15 09:54:53.236791] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:28:25.363 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:25.363 Zero copy mechanism will not be used. 00:28:25.363 [2024-07-15 09:54:53.237124] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:28:25.619 EAL: TSC is not safe to use in SMP mode 00:28:25.619 EAL: TSC is not invariant 00:28:25.619 [2024-07-15 09:54:53.671138] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.876 [2024-07-15 09:54:53.786523] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
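
At this point bdevperf has been launched against /var/tmp/spdk-raid.sock and the harness is blocked in waitforlisten 67217 until the new process answers on that socket. A rough functional sketch of such a wait loop — assumed behavior for illustration, not the real common/autotest_common.sh implementation:

  # Poll the RPC socket until the app responds or retries run out
  # (hypothetical re-implementation of the waitforlisten helper).
  waitforlisten() {
    local pid=$1 rpc_sock=$2 retries=100
    while (( retries-- > 0 )); do
      # Give up early if the process already exited.
      kill -0 "$pid" 2>/dev/null || return 1
      # rpc_get_methods succeeds once the RPC server accepts connections.
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" rpc_get_methods \
        >/dev/null 2>&1 && return 0
      sleep 0.1
    done
    return 1
  }
  waitforlisten 67217 /var/tmp/spdk-raid.sock
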
00:28:25.876 [2024-07-15 09:54:53.789020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:25.876 [2024-07-15 09:54:53.789740] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:25.876 [2024-07-15 09:54:53.789753] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:26.443 09:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:26.443 09:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:28:26.443 09:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:26.443 09:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:28:26.443 BaseBdev1_malloc 00:28:26.443 09:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:26.701 [2024-07-15 09:54:54.624956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:26.701 [2024-07-15 09:54:54.625025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:26.701 [2024-07-15 09:54:54.625629] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x97fa4434780 00:28:26.701 [2024-07-15 09:54:54.625655] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:26.702 [2024-07-15 09:54:54.626339] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:26.702 [2024-07-15 09:54:54.626369] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:26.702 BaseBdev1 00:28:26.702 09:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:26.702 09:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:28:27.028 BaseBdev2_malloc 00:28:27.028 09:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:27.028 [2024-07-15 09:54:55.052994] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:27.028 [2024-07-15 09:54:55.053064] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:27.028 [2024-07-15 09:54:55.053097] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x97fa4434c80 00:28:27.028 [2024-07-15 09:54:55.053104] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:27.028 [2024-07-15 09:54:55.053829] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:27.028 [2024-07-15 09:54:55.053855] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:27.028 BaseBdev2 00:28:27.028 09:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:28:27.286 spare_malloc 
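
The spare device being assembled here is a three-layer stack: a malloc bdev carrying interleaved per-block metadata, a delay bdev on top of it, and a passthru bdev named spare (the next two RPC calls in the log). Collected in one place below, with argument meanings as assumed here since the log does not spell them out: a 32 MiB volume of 4096-byte data blocks, -m 32 -i adding 32 bytes of interleaved metadata per block (hence the 4128-byte blocks reported elsewhere), and the delay bdev's four values setting average/p99 read and write latencies in microseconds:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Backing store: 32 MiB of 4096+32-byte blocks, metadata interleaved.
  $RPC bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc
  # Artificial latency layer: instant reads, 100000us writes (assumed flag order).
  $RPC bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
  # Claim the delay bdev behind a passthru named "spare".
  $RPC bdev_passthru_create -b spare_delay -p spare
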
00:28:27.286 09:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:27.544 spare_delay 00:28:27.544 09:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:27.802 [2024-07-15 09:54:55.701009] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:27.802 [2024-07-15 09:54:55.701071] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:27.802 [2024-07-15 09:54:55.701100] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x97fa4435400 00:28:27.802 [2024-07-15 09:54:55.701107] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:27.802 [2024-07-15 09:54:55.701735] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:27.802 [2024-07-15 09:54:55.701762] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:27.802 spare 00:28:27.802 09:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:28:27.802 [2024-07-15 09:54:55.901035] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:27.802 [2024-07-15 09:54:55.901668] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:27.802 [2024-07-15 09:54:55.901741] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x97fa4435680 00:28:27.802 [2024-07-15 09:54:55.901748] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:27.802 [2024-07-15 09:54:55.901794] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x97fa4497e20 00:28:27.802 [2024-07-15 09:54:55.901806] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x97fa4435680 00:28:27.803 [2024-07-15 09:54:55.901810] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x97fa4435680 00:28:27.803 [2024-07-15 09:54:55.901825] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:28.061 09:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:28.061 09:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:28.061 09:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:28.061 09:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:28.061 09:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:28.061 09:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:28.061 09:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:28.061 09:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:28.061 09:54:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:28.061 09:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:28.061 09:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:28.061 09:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:28.061 09:54:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:28.061 "name": "raid_bdev1", 00:28:28.061 "uuid": "4a2dbdc6-4290-11ef-a0af-c98d8ee52a94", 00:28:28.061 "strip_size_kb": 0, 00:28:28.061 "state": "online", 00:28:28.061 "raid_level": "raid1", 00:28:28.061 "superblock": true, 00:28:28.061 "num_base_bdevs": 2, 00:28:28.061 "num_base_bdevs_discovered": 2, 00:28:28.061 "num_base_bdevs_operational": 2, 00:28:28.061 "base_bdevs_list": [ 00:28:28.061 { 00:28:28.061 "name": "BaseBdev1", 00:28:28.061 "uuid": "303ae4d7-0704-5c56-bc01-667e4d7d835d", 00:28:28.061 "is_configured": true, 00:28:28.061 "data_offset": 256, 00:28:28.061 "data_size": 7936 00:28:28.061 }, 00:28:28.061 { 00:28:28.061 "name": "BaseBdev2", 00:28:28.061 "uuid": "aa1e739f-ab7b-595f-bf77-73acf0d0b2e7", 00:28:28.061 "is_configured": true, 00:28:28.061 "data_offset": 256, 00:28:28.061 "data_size": 7936 00:28:28.061 } 00:28:28.061 ] 00:28:28.061 }' 00:28:28.061 09:54:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:28.061 09:54:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:28.628 09:54:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:28.628 09:54:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:28:28.628 [2024-07-15 09:54:56.657110] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:28.628 09:54:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:28:28.628 09:54:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:28.628 09:54:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:28.888 09:54:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:28:28.888 09:54:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:28:28.888 09:54:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@623 -- # '[' false = true ']' 00:28:28.888 09:54:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:28:29.147 [2024-07-15 09:54:57.073085] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:29.147 09:54:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:29.147 09:54:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:29.147 09:54:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:29.147 09:54:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:29.147 09:54:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:29.147 09:54:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:29.147 09:54:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:29.147 09:54:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:29.147 09:54:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:29.147 09:54:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:29.147 09:54:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:29.147 09:54:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:29.407 09:54:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:29.407 "name": "raid_bdev1", 00:28:29.407 "uuid": "4a2dbdc6-4290-11ef-a0af-c98d8ee52a94", 00:28:29.407 "strip_size_kb": 0, 00:28:29.407 "state": "online", 00:28:29.407 "raid_level": "raid1", 00:28:29.407 "superblock": true, 00:28:29.407 "num_base_bdevs": 2, 00:28:29.407 "num_base_bdevs_discovered": 1, 00:28:29.407 "num_base_bdevs_operational": 1, 00:28:29.407 "base_bdevs_list": [ 00:28:29.407 { 00:28:29.407 "name": null, 00:28:29.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:29.407 "is_configured": false, 00:28:29.407 "data_offset": 256, 00:28:29.407 "data_size": 7936 00:28:29.407 }, 00:28:29.407 { 00:28:29.407 "name": "BaseBdev2", 00:28:29.407 "uuid": "aa1e739f-ab7b-595f-bf77-73acf0d0b2e7", 00:28:29.407 "is_configured": true, 00:28:29.407 "data_offset": 256, 00:28:29.407 "data_size": 7936 00:28:29.407 } 00:28:29.407 ] 00:28:29.407 }' 00:28:29.407 09:54:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:29.407 09:54:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:29.702 09:54:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:29.702 [2024-07-15 09:54:57.797127] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:29.702 [2024-07-15 09:54:57.797282] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x97fa4497ec0 00:28:29.702 [2024-07-15 09:54:57.798283] bdev_raid.c:2825:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:29.962 09:54:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # sleep 1 00:28:30.895 09:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:30.895 09:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 
00:28:30.895 09:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:30.895 09:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:30.895 09:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:30.895 09:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:30.895 09:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:31.154 09:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:31.154 "name": "raid_bdev1", 00:28:31.154 "uuid": "4a2dbdc6-4290-11ef-a0af-c98d8ee52a94", 00:28:31.154 "strip_size_kb": 0, 00:28:31.154 "state": "online", 00:28:31.154 "raid_level": "raid1", 00:28:31.154 "superblock": true, 00:28:31.154 "num_base_bdevs": 2, 00:28:31.154 "num_base_bdevs_discovered": 2, 00:28:31.154 "num_base_bdevs_operational": 2, 00:28:31.154 "process": { 00:28:31.154 "type": "rebuild", 00:28:31.154 "target": "spare", 00:28:31.154 "progress": { 00:28:31.154 "blocks": 3072, 00:28:31.154 "percent": 38 00:28:31.154 } 00:28:31.154 }, 00:28:31.154 "base_bdevs_list": [ 00:28:31.154 { 00:28:31.154 "name": "spare", 00:28:31.154 "uuid": "d21fe3a0-eb2d-035b-90f0-a0bdb941671f", 00:28:31.154 "is_configured": true, 00:28:31.154 "data_offset": 256, 00:28:31.154 "data_size": 7936 00:28:31.154 }, 00:28:31.154 { 00:28:31.154 "name": "BaseBdev2", 00:28:31.154 "uuid": "aa1e739f-ab7b-595f-bf77-73acf0d0b2e7", 00:28:31.154 "is_configured": true, 00:28:31.154 "data_offset": 256, 00:28:31.154 "data_size": 7936 00:28:31.154 } 00:28:31.154 ] 00:28:31.154 }' 00:28:31.154 09:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:31.154 09:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:31.154 09:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:31.154 09:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:31.154 09:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:31.154 [2024-07-15 09:54:59.238373] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:31.414 [2024-07-15 09:54:59.308340] bdev_raid.c:2516:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: Operation not supported by device 00:28:31.414 [2024-07-15 09:54:59.308385] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:31.414 [2024-07-15 09:54:59.308389] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:31.414 [2024-07-15 09:54:59.308393] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: Operation not supported by device 00:28:31.414 09:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:31.414 09:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 
00:28:31.414 09:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:31.414 09:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:31.414 09:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:31.414 09:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:31.414 09:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:31.414 09:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:31.414 09:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:31.414 09:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:31.414 09:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:31.414 09:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:31.673 09:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:31.673 "name": "raid_bdev1", 00:28:31.673 "uuid": "4a2dbdc6-4290-11ef-a0af-c98d8ee52a94", 00:28:31.673 "strip_size_kb": 0, 00:28:31.673 "state": "online", 00:28:31.673 "raid_level": "raid1", 00:28:31.673 "superblock": true, 00:28:31.673 "num_base_bdevs": 2, 00:28:31.673 "num_base_bdevs_discovered": 1, 00:28:31.673 "num_base_bdevs_operational": 1, 00:28:31.673 "base_bdevs_list": [ 00:28:31.673 { 00:28:31.673 "name": null, 00:28:31.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:31.673 "is_configured": false, 00:28:31.673 "data_offset": 256, 00:28:31.673 "data_size": 7936 00:28:31.673 }, 00:28:31.673 { 00:28:31.673 "name": "BaseBdev2", 00:28:31.673 "uuid": "aa1e739f-ab7b-595f-bf77-73acf0d0b2e7", 00:28:31.673 "is_configured": true, 00:28:31.673 "data_offset": 256, 00:28:31.673 "data_size": 7936 00:28:31.673 } 00:28:31.673 ] 00:28:31.673 }' 00:28:31.673 09:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:31.673 09:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:31.932 09:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:31.932 09:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:31.932 09:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:31.932 09:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:31.932 09:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:31.932 09:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:31.932 09:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:31.932 09:55:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:31.932 "name": "raid_bdev1", 00:28:31.932 "uuid": "4a2dbdc6-4290-11ef-a0af-c98d8ee52a94", 00:28:31.932 "strip_size_kb": 0, 00:28:31.932 "state": "online", 00:28:31.932 "raid_level": "raid1", 00:28:31.932 "superblock": true, 00:28:31.932 "num_base_bdevs": 2, 00:28:31.932 "num_base_bdevs_discovered": 1, 00:28:31.932 "num_base_bdevs_operational": 1, 00:28:31.932 "base_bdevs_list": [ 00:28:31.932 { 00:28:31.932 "name": null, 00:28:31.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:31.932 "is_configured": false, 00:28:31.932 "data_offset": 256, 00:28:31.932 "data_size": 7936 00:28:31.932 }, 00:28:31.932 { 00:28:31.932 "name": "BaseBdev2", 00:28:31.932 "uuid": "aa1e739f-ab7b-595f-bf77-73acf0d0b2e7", 00:28:31.932 "is_configured": true, 00:28:31.932 "data_offset": 256, 00:28:31.932 "data_size": 7936 00:28:31.932 } 00:28:31.932 ] 00:28:31.932 }' 00:28:31.932 09:55:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:31.932 09:55:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:31.932 09:55:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:31.932 09:55:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:31.932 09:55:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:32.191 [2024-07-15 09:55:00.204655] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:32.191 [2024-07-15 09:55:00.204799] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x97fa4497e20 00:28:32.191 [2024-07-15 09:55:00.205793] bdev_raid.c:2825:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:32.191 09:55:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:33.582 09:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:33.583 09:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:33.583 09:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:33.583 09:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:33.583 09:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:33.583 09:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:33.583 09:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:33.583 09:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:33.583 "name": "raid_bdev1", 00:28:33.583 "uuid": "4a2dbdc6-4290-11ef-a0af-c98d8ee52a94", 00:28:33.583 "strip_size_kb": 0, 00:28:33.583 "state": "online", 00:28:33.583 "raid_level": "raid1", 00:28:33.583 "superblock": true, 00:28:33.583 "num_base_bdevs": 2, 00:28:33.583 "num_base_bdevs_discovered": 2, 00:28:33.583 
"num_base_bdevs_operational": 2, 00:28:33.583 "process": { 00:28:33.583 "type": "rebuild", 00:28:33.583 "target": "spare", 00:28:33.583 "progress": { 00:28:33.583 "blocks": 3072, 00:28:33.583 "percent": 38 00:28:33.583 } 00:28:33.583 }, 00:28:33.583 "base_bdevs_list": [ 00:28:33.583 { 00:28:33.583 "name": "spare", 00:28:33.583 "uuid": "d21fe3a0-eb2d-035b-90f0-a0bdb941671f", 00:28:33.583 "is_configured": true, 00:28:33.583 "data_offset": 256, 00:28:33.583 "data_size": 7936 00:28:33.583 }, 00:28:33.583 { 00:28:33.583 "name": "BaseBdev2", 00:28:33.583 "uuid": "aa1e739f-ab7b-595f-bf77-73acf0d0b2e7", 00:28:33.583 "is_configured": true, 00:28:33.583 "data_offset": 256, 00:28:33.583 "data_size": 7936 00:28:33.583 } 00:28:33.583 ] 00:28:33.583 }' 00:28:33.583 09:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:33.583 09:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:33.583 09:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:33.583 09:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:33.583 09:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:28:33.583 09:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:28:33.583 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:28:33.583 09:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:28:33.583 09:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:28:33.583 09:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:28:33.583 09:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@705 -- # local timeout=626 00:28:33.583 09:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:33.583 09:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:33.583 09:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:33.583 09:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:33.583 09:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:33.583 09:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:33.583 09:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:33.583 09:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:33.841 09:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:33.841 "name": "raid_bdev1", 00:28:33.841 "uuid": "4a2dbdc6-4290-11ef-a0af-c98d8ee52a94", 00:28:33.841 "strip_size_kb": 0, 00:28:33.841 "state": "online", 00:28:33.841 "raid_level": "raid1", 00:28:33.841 "superblock": true, 00:28:33.841 
"num_base_bdevs": 2, 00:28:33.841 "num_base_bdevs_discovered": 2, 00:28:33.841 "num_base_bdevs_operational": 2, 00:28:33.841 "process": { 00:28:33.841 "type": "rebuild", 00:28:33.841 "target": "spare", 00:28:33.841 "progress": { 00:28:33.841 "blocks": 3584, 00:28:33.841 "percent": 45 00:28:33.841 } 00:28:33.841 }, 00:28:33.841 "base_bdevs_list": [ 00:28:33.841 { 00:28:33.841 "name": "spare", 00:28:33.841 "uuid": "d21fe3a0-eb2d-035b-90f0-a0bdb941671f", 00:28:33.841 "is_configured": true, 00:28:33.841 "data_offset": 256, 00:28:33.841 "data_size": 7936 00:28:33.841 }, 00:28:33.841 { 00:28:33.841 "name": "BaseBdev2", 00:28:33.841 "uuid": "aa1e739f-ab7b-595f-bf77-73acf0d0b2e7", 00:28:33.841 "is_configured": true, 00:28:33.841 "data_offset": 256, 00:28:33.841 "data_size": 7936 00:28:33.841 } 00:28:33.841 ] 00:28:33.841 }' 00:28:33.841 09:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:33.841 09:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:33.841 09:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:33.841 09:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:33.841 09:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:34.777 09:55:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:34.777 09:55:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:34.777 09:55:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:34.777 09:55:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:34.777 09:55:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:34.777 09:55:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:34.777 09:55:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:34.777 09:55:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:35.036 09:55:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:35.036 "name": "raid_bdev1", 00:28:35.036 "uuid": "4a2dbdc6-4290-11ef-a0af-c98d8ee52a94", 00:28:35.037 "strip_size_kb": 0, 00:28:35.037 "state": "online", 00:28:35.037 "raid_level": "raid1", 00:28:35.037 "superblock": true, 00:28:35.037 "num_base_bdevs": 2, 00:28:35.037 "num_base_bdevs_discovered": 2, 00:28:35.037 "num_base_bdevs_operational": 2, 00:28:35.037 "process": { 00:28:35.037 "type": "rebuild", 00:28:35.037 "target": "spare", 00:28:35.037 "progress": { 00:28:35.037 "blocks": 6912, 00:28:35.037 "percent": 87 00:28:35.037 } 00:28:35.037 }, 00:28:35.037 "base_bdevs_list": [ 00:28:35.037 { 00:28:35.037 "name": "spare", 00:28:35.037 "uuid": "d21fe3a0-eb2d-035b-90f0-a0bdb941671f", 00:28:35.037 "is_configured": true, 00:28:35.037 "data_offset": 256, 00:28:35.037 "data_size": 7936 00:28:35.037 }, 00:28:35.037 { 00:28:35.037 "name": "BaseBdev2", 00:28:35.037 "uuid": 
"aa1e739f-ab7b-595f-bf77-73acf0d0b2e7", 00:28:35.037 "is_configured": true, 00:28:35.037 "data_offset": 256, 00:28:35.037 "data_size": 7936 00:28:35.037 } 00:28:35.037 ] 00:28:35.037 }' 00:28:35.037 09:55:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:35.037 09:55:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:35.037 09:55:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:35.037 09:55:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:35.037 09:55:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:35.311 [2024-07-15 09:55:03.325775] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:35.311 [2024-07-15 09:55:03.325822] bdev_raid.c:2506:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:35.311 [2024-07-15 09:55:03.325890] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:36.316 09:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:36.316 09:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:36.316 09:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:36.316 09:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:36.316 09:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:36.316 09:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:36.316 09:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:36.316 09:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:36.316 09:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:36.316 "name": "raid_bdev1", 00:28:36.316 "uuid": "4a2dbdc6-4290-11ef-a0af-c98d8ee52a94", 00:28:36.316 "strip_size_kb": 0, 00:28:36.316 "state": "online", 00:28:36.316 "raid_level": "raid1", 00:28:36.316 "superblock": true, 00:28:36.316 "num_base_bdevs": 2, 00:28:36.316 "num_base_bdevs_discovered": 2, 00:28:36.316 "num_base_bdevs_operational": 2, 00:28:36.316 "base_bdevs_list": [ 00:28:36.316 { 00:28:36.316 "name": "spare", 00:28:36.316 "uuid": "d21fe3a0-eb2d-035b-90f0-a0bdb941671f", 00:28:36.316 "is_configured": true, 00:28:36.316 "data_offset": 256, 00:28:36.316 "data_size": 7936 00:28:36.316 }, 00:28:36.316 { 00:28:36.316 "name": "BaseBdev2", 00:28:36.316 "uuid": "aa1e739f-ab7b-595f-bf77-73acf0d0b2e7", 00:28:36.316 "is_configured": true, 00:28:36.316 "data_offset": 256, 00:28:36.316 "data_size": 7936 00:28:36.316 } 00:28:36.316 ] 00:28:36.316 }' 00:28:36.316 09:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:36.316 09:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:36.316 09:55:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:36.316 09:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:28:36.316 09:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # break 00:28:36.316 09:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:36.316 09:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:36.316 09:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:36.316 09:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:36.316 09:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:36.316 09:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:36.316 09:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:36.574 09:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:36.574 "name": "raid_bdev1", 00:28:36.574 "uuid": "4a2dbdc6-4290-11ef-a0af-c98d8ee52a94", 00:28:36.574 "strip_size_kb": 0, 00:28:36.574 "state": "online", 00:28:36.574 "raid_level": "raid1", 00:28:36.574 "superblock": true, 00:28:36.574 "num_base_bdevs": 2, 00:28:36.574 "num_base_bdevs_discovered": 2, 00:28:36.574 "num_base_bdevs_operational": 2, 00:28:36.574 "base_bdevs_list": [ 00:28:36.574 { 00:28:36.574 "name": "spare", 00:28:36.574 "uuid": "d21fe3a0-eb2d-035b-90f0-a0bdb941671f", 00:28:36.574 "is_configured": true, 00:28:36.574 "data_offset": 256, 00:28:36.574 "data_size": 7936 00:28:36.574 }, 00:28:36.574 { 00:28:36.574 "name": "BaseBdev2", 00:28:36.574 "uuid": "aa1e739f-ab7b-595f-bf77-73acf0d0b2e7", 00:28:36.574 "is_configured": true, 00:28:36.574 "data_offset": 256, 00:28:36.574 "data_size": 7936 00:28:36.574 } 00:28:36.574 ] 00:28:36.574 }' 00:28:36.574 09:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:36.574 09:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:36.574 09:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:36.574 09:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:36.575 09:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:36.575 09:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:36.575 09:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:36.575 09:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:36.575 09:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:36.575 09:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:28:36.575 09:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:36.575 09:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:36.575 09:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:36.575 09:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:36.575 09:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:36.575 09:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:36.832 09:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:36.832 "name": "raid_bdev1", 00:28:36.832 "uuid": "4a2dbdc6-4290-11ef-a0af-c98d8ee52a94", 00:28:36.832 "strip_size_kb": 0, 00:28:36.832 "state": "online", 00:28:36.832 "raid_level": "raid1", 00:28:36.832 "superblock": true, 00:28:36.832 "num_base_bdevs": 2, 00:28:36.832 "num_base_bdevs_discovered": 2, 00:28:36.832 "num_base_bdevs_operational": 2, 00:28:36.832 "base_bdevs_list": [ 00:28:36.832 { 00:28:36.832 "name": "spare", 00:28:36.832 "uuid": "d21fe3a0-eb2d-035b-90f0-a0bdb941671f", 00:28:36.832 "is_configured": true, 00:28:36.832 "data_offset": 256, 00:28:36.832 "data_size": 7936 00:28:36.832 }, 00:28:36.832 { 00:28:36.832 "name": "BaseBdev2", 00:28:36.832 "uuid": "aa1e739f-ab7b-595f-bf77-73acf0d0b2e7", 00:28:36.832 "is_configured": true, 00:28:36.832 "data_offset": 256, 00:28:36.832 "data_size": 7936 00:28:36.832 } 00:28:36.832 ] 00:28:36.832 }' 00:28:36.832 09:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:36.832 09:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:37.091 09:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:37.350 [2024-07-15 09:55:05.222203] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:37.350 [2024-07-15 09:55:05.222233] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:37.350 [2024-07-15 09:55:05.222257] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:37.350 [2024-07-15 09:55:05.222274] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:37.350 [2024-07-15 09:55:05.222278] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x97fa4435680 name raid_bdev1, state offline 00:28:37.350 09:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:37.350 09:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # jq length 00:28:37.609 09:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:28:37.609 09:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # '[' false = true ']' 00:28:37.609 09:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@742 -- # '[' 
true = true ']' 00:28:37.609 09:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:37.609 09:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:37.868 [2024-07-15 09:55:05.862234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:37.868 [2024-07-15 09:55:05.862315] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:37.868 [2024-07-15 09:55:05.862349] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x97fa4435400 00:28:37.868 [2024-07-15 09:55:05.862357] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:37.868 [2024-07-15 09:55:05.863071] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:37.868 [2024-07-15 09:55:05.863105] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:37.868 [2024-07-15 09:55:05.863127] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:37.868 [2024-07-15 09:55:05.863141] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:37.868 [2024-07-15 09:55:05.863165] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:37.868 spare 00:28:37.868 09:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:37.868 09:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:37.868 09:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:37.868 09:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:37.868 09:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:37.868 09:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:37.868 09:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:37.868 09:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:37.868 09:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:37.868 09:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:37.868 09:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:37.868 09:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:37.868 [2024-07-15 09:55:05.963182] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x97fa4435680 00:28:37.868 [2024-07-15 09:55:05.963204] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:37.868 [2024-07-15 09:55:05.963239] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x97fa4497e20 00:28:37.868 [2024-07-15 09:55:05.963254] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x97fa4435680 00:28:37.868 [2024-07-15 09:55:05.963257] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x97fa4435680 00:28:37.868 [2024-07-15 09:55:05.963269] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:38.127 09:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:38.127 "name": "raid_bdev1", 00:28:38.127 "uuid": "4a2dbdc6-4290-11ef-a0af-c98d8ee52a94", 00:28:38.127 "strip_size_kb": 0, 00:28:38.127 "state": "online", 00:28:38.127 "raid_level": "raid1", 00:28:38.127 "superblock": true, 00:28:38.127 "num_base_bdevs": 2, 00:28:38.127 "num_base_bdevs_discovered": 2, 00:28:38.127 "num_base_bdevs_operational": 2, 00:28:38.127 "base_bdevs_list": [ 00:28:38.127 { 00:28:38.127 "name": "spare", 00:28:38.127 "uuid": "d21fe3a0-eb2d-035b-90f0-a0bdb941671f", 00:28:38.127 "is_configured": true, 00:28:38.127 "data_offset": 256, 00:28:38.127 "data_size": 7936 00:28:38.127 }, 00:28:38.127 { 00:28:38.127 "name": "BaseBdev2", 00:28:38.127 "uuid": "aa1e739f-ab7b-595f-bf77-73acf0d0b2e7", 00:28:38.127 "is_configured": true, 00:28:38.127 "data_offset": 256, 00:28:38.127 "data_size": 7936 00:28:38.127 } 00:28:38.127 ] 00:28:38.127 }' 00:28:38.127 09:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:38.127 09:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:38.385 09:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:38.385 09:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:38.385 09:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:38.385 09:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:38.385 09:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:38.385 09:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:38.385 09:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:38.647 09:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:38.647 "name": "raid_bdev1", 00:28:38.647 "uuid": "4a2dbdc6-4290-11ef-a0af-c98d8ee52a94", 00:28:38.647 "strip_size_kb": 0, 00:28:38.647 "state": "online", 00:28:38.647 "raid_level": "raid1", 00:28:38.647 "superblock": true, 00:28:38.647 "num_base_bdevs": 2, 00:28:38.647 "num_base_bdevs_discovered": 2, 00:28:38.647 "num_base_bdevs_operational": 2, 00:28:38.647 "base_bdevs_list": [ 00:28:38.647 { 00:28:38.647 "name": "spare", 00:28:38.647 "uuid": "d21fe3a0-eb2d-035b-90f0-a0bdb941671f", 00:28:38.647 "is_configured": true, 00:28:38.647 "data_offset": 256, 00:28:38.647 "data_size": 7936 00:28:38.647 }, 00:28:38.647 { 00:28:38.647 "name": "BaseBdev2", 00:28:38.647 "uuid": "aa1e739f-ab7b-595f-bf77-73acf0d0b2e7", 00:28:38.647 "is_configured": true, 00:28:38.647 "data_offset": 256, 00:28:38.647 "data_size": 7936 00:28:38.647 } 00:28:38.647 ] 00:28:38.647 }' 00:28:38.647 09:55:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:38.647 09:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:38.647 09:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:38.647 09:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:38.647 09:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:38.647 09:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:28:38.906 09:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:28:38.906 09:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:38.906 [2024-07-15 09:55:06.978355] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:38.906 09:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:38.906 09:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:38.906 09:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:38.906 09:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:38.906 09:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:38.906 09:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:38.906 09:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:38.906 09:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:38.906 09:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:38.906 09:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:38.906 09:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:38.906 09:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:39.166 09:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:39.166 "name": "raid_bdev1", 00:28:39.166 "uuid": "4a2dbdc6-4290-11ef-a0af-c98d8ee52a94", 00:28:39.166 "strip_size_kb": 0, 00:28:39.166 "state": "online", 00:28:39.166 "raid_level": "raid1", 00:28:39.166 "superblock": true, 00:28:39.166 "num_base_bdevs": 2, 00:28:39.166 "num_base_bdevs_discovered": 1, 00:28:39.166 "num_base_bdevs_operational": 1, 00:28:39.166 "base_bdevs_list": [ 00:28:39.166 { 00:28:39.166 "name": null, 00:28:39.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:39.166 "is_configured": false, 00:28:39.166 "data_offset": 256, 00:28:39.166 "data_size": 7936 00:28:39.166 }, 
00:28:39.166 { 00:28:39.166 "name": "BaseBdev2", 00:28:39.166 "uuid": "aa1e739f-ab7b-595f-bf77-73acf0d0b2e7", 00:28:39.166 "is_configured": true, 00:28:39.166 "data_offset": 256, 00:28:39.166 "data_size": 7936 00:28:39.166 } 00:28:39.166 ] 00:28:39.166 }' 00:28:39.166 09:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:39.166 09:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:39.425 09:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:39.684 [2024-07-15 09:55:07.674392] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:39.684 [2024-07-15 09:55:07.674471] bdev_raid.c:3564:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:39.684 [2024-07-15 09:55:07.674476] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:28:39.684 [2024-07-15 09:55:07.674513] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:39.684 [2024-07-15 09:55:07.674608] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x97fa4497ec0 00:28:39.684 [2024-07-15 09:55:07.675287] bdev_raid.c:2825:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:39.684 09:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # sleep 1 00:28:40.654 09:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:40.654 09:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:40.654 09:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:40.654 09:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:40.654 09:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:40.654 09:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:40.654 09:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:40.923 09:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:40.923 "name": "raid_bdev1", 00:28:40.923 "uuid": "4a2dbdc6-4290-11ef-a0af-c98d8ee52a94", 00:28:40.923 "strip_size_kb": 0, 00:28:40.923 "state": "online", 00:28:40.923 "raid_level": "raid1", 00:28:40.923 "superblock": true, 00:28:40.923 "num_base_bdevs": 2, 00:28:40.923 "num_base_bdevs_discovered": 2, 00:28:40.923 "num_base_bdevs_operational": 2, 00:28:40.923 "process": { 00:28:40.923 "type": "rebuild", 00:28:40.923 "target": "spare", 00:28:40.923 "progress": { 00:28:40.923 "blocks": 3072, 00:28:40.923 "percent": 38 00:28:40.923 } 00:28:40.923 }, 00:28:40.923 "base_bdevs_list": [ 00:28:40.923 { 00:28:40.923 "name": "spare", 00:28:40.923 "uuid": "d21fe3a0-eb2d-035b-90f0-a0bdb941671f", 00:28:40.923 "is_configured": true, 00:28:40.923 "data_offset": 256, 00:28:40.923 "data_size": 7936 00:28:40.923 }, 00:28:40.923 { 00:28:40.923 
"name": "BaseBdev2", 00:28:40.923 "uuid": "aa1e739f-ab7b-595f-bf77-73acf0d0b2e7", 00:28:40.923 "is_configured": true, 00:28:40.923 "data_offset": 256, 00:28:40.923 "data_size": 7936 00:28:40.923 } 00:28:40.923 ] 00:28:40.923 }' 00:28:40.923 09:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:40.923 09:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:40.923 09:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:40.923 09:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:40.923 09:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:41.182 [2024-07-15 09:55:09.147104] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:41.182 [2024-07-15 09:55:09.184967] bdev_raid.c:2516:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: Operation not supported by device 00:28:41.182 [2024-07-15 09:55:09.185014] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:41.182 [2024-07-15 09:55:09.185019] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:41.182 [2024-07-15 09:55:09.185022] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: Operation not supported by device 00:28:41.182 09:55:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:41.182 09:55:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:41.182 09:55:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:41.182 09:55:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:41.182 09:55:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:41.182 09:55:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:41.182 09:55:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:41.182 09:55:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:41.182 09:55:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:41.182 09:55:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:41.182 09:55:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:41.182 09:55:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:41.442 09:55:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:41.442 "name": "raid_bdev1", 00:28:41.442 "uuid": "4a2dbdc6-4290-11ef-a0af-c98d8ee52a94", 00:28:41.442 "strip_size_kb": 0, 00:28:41.442 "state": "online", 00:28:41.442 "raid_level": "raid1", 00:28:41.442 "superblock": true, 00:28:41.442 
"num_base_bdevs": 2, 00:28:41.442 "num_base_bdevs_discovered": 1, 00:28:41.442 "num_base_bdevs_operational": 1, 00:28:41.442 "base_bdevs_list": [ 00:28:41.442 { 00:28:41.442 "name": null, 00:28:41.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:41.442 "is_configured": false, 00:28:41.442 "data_offset": 256, 00:28:41.442 "data_size": 7936 00:28:41.442 }, 00:28:41.442 { 00:28:41.442 "name": "BaseBdev2", 00:28:41.442 "uuid": "aa1e739f-ab7b-595f-bf77-73acf0d0b2e7", 00:28:41.442 "is_configured": true, 00:28:41.442 "data_offset": 256, 00:28:41.442 "data_size": 7936 00:28:41.442 } 00:28:41.442 ] 00:28:41.442 }' 00:28:41.442 09:55:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:41.442 09:55:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:41.701 09:55:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:41.959 [2024-07-15 09:55:09.857419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:41.959 [2024-07-15 09:55:09.857477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:41.959 [2024-07-15 09:55:09.857510] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x97fa4435400 00:28:41.959 [2024-07-15 09:55:09.857517] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:41.959 [2024-07-15 09:55:09.857585] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:41.959 [2024-07-15 09:55:09.857593] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:41.959 [2024-07-15 09:55:09.857609] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:41.959 [2024-07-15 09:55:09.857614] bdev_raid.c:3564:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:41.959 [2024-07-15 09:55:09.857618] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:28:41.959 [2024-07-15 09:55:09.857628] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:41.959 [2024-07-15 09:55:09.857720] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x97fa4497e20 00:28:41.959 [2024-07-15 09:55:09.858423] bdev_raid.c:2825:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:41.959 spare 00:28:41.959 09:55:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # sleep 1 00:28:42.894 09:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:42.894 09:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:42.894 09:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:42.894 09:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:42.894 09:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:42.894 09:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:42.894 09:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:43.152 09:55:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:43.152 "name": "raid_bdev1", 00:28:43.152 "uuid": "4a2dbdc6-4290-11ef-a0af-c98d8ee52a94", 00:28:43.152 "strip_size_kb": 0, 00:28:43.152 "state": "online", 00:28:43.152 "raid_level": "raid1", 00:28:43.152 "superblock": true, 00:28:43.152 "num_base_bdevs": 2, 00:28:43.152 "num_base_bdevs_discovered": 2, 00:28:43.152 "num_base_bdevs_operational": 2, 00:28:43.152 "process": { 00:28:43.152 "type": "rebuild", 00:28:43.152 "target": "spare", 00:28:43.152 "progress": { 00:28:43.152 "blocks": 3328, 00:28:43.152 "percent": 41 00:28:43.152 } 00:28:43.152 }, 00:28:43.152 "base_bdevs_list": [ 00:28:43.152 { 00:28:43.152 "name": "spare", 00:28:43.152 "uuid": "d21fe3a0-eb2d-035b-90f0-a0bdb941671f", 00:28:43.152 "is_configured": true, 00:28:43.152 "data_offset": 256, 00:28:43.152 "data_size": 7936 00:28:43.152 }, 00:28:43.152 { 00:28:43.152 "name": "BaseBdev2", 00:28:43.152 "uuid": "aa1e739f-ab7b-595f-bf77-73acf0d0b2e7", 00:28:43.152 "is_configured": true, 00:28:43.152 "data_offset": 256, 00:28:43.152 "data_size": 7936 00:28:43.152 } 00:28:43.152 ] 00:28:43.152 }' 00:28:43.152 09:55:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:43.152 09:55:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:43.152 09:55:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:43.152 09:55:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:43.152 09:55:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:43.424 [2024-07-15 09:55:11.391611] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:43.425 [2024-07-15 09:55:11.468804] 
bdev_raid.c:2516:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: Operation not supported by device 00:28:43.425 [2024-07-15 09:55:11.468864] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:43.425 [2024-07-15 09:55:11.468869] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:43.425 [2024-07-15 09:55:11.468872] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: Operation not supported by device 00:28:43.425 09:55:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:43.425 09:55:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:43.425 09:55:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:43.425 09:55:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:43.425 09:55:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:43.425 09:55:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:43.425 09:55:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:43.425 09:55:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:43.425 09:55:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:43.425 09:55:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:43.425 09:55:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:43.425 09:55:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:43.684 09:55:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:43.684 "name": "raid_bdev1", 00:28:43.684 "uuid": "4a2dbdc6-4290-11ef-a0af-c98d8ee52a94", 00:28:43.684 "strip_size_kb": 0, 00:28:43.684 "state": "online", 00:28:43.684 "raid_level": "raid1", 00:28:43.684 "superblock": true, 00:28:43.685 "num_base_bdevs": 2, 00:28:43.685 "num_base_bdevs_discovered": 1, 00:28:43.685 "num_base_bdevs_operational": 1, 00:28:43.685 "base_bdevs_list": [ 00:28:43.685 { 00:28:43.685 "name": null, 00:28:43.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:43.685 "is_configured": false, 00:28:43.685 "data_offset": 256, 00:28:43.685 "data_size": 7936 00:28:43.685 }, 00:28:43.685 { 00:28:43.685 "name": "BaseBdev2", 00:28:43.685 "uuid": "aa1e739f-ab7b-595f-bf77-73acf0d0b2e7", 00:28:43.685 "is_configured": true, 00:28:43.685 "data_offset": 256, 00:28:43.685 "data_size": 7936 00:28:43.685 } 00:28:43.685 ] 00:28:43.685 }' 00:28:43.685 09:55:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:43.685 09:55:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:43.944 09:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:43.944 09:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_name=raid_bdev1 00:28:43.944 09:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:43.944 09:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:43.944 09:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:43.944 09:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:43.944 09:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:44.203 09:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:44.203 "name": "raid_bdev1", 00:28:44.203 "uuid": "4a2dbdc6-4290-11ef-a0af-c98d8ee52a94", 00:28:44.203 "strip_size_kb": 0, 00:28:44.203 "state": "online", 00:28:44.203 "raid_level": "raid1", 00:28:44.203 "superblock": true, 00:28:44.203 "num_base_bdevs": 2, 00:28:44.203 "num_base_bdevs_discovered": 1, 00:28:44.203 "num_base_bdevs_operational": 1, 00:28:44.203 "base_bdevs_list": [ 00:28:44.203 { 00:28:44.203 "name": null, 00:28:44.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:44.203 "is_configured": false, 00:28:44.203 "data_offset": 256, 00:28:44.203 "data_size": 7936 00:28:44.203 }, 00:28:44.203 { 00:28:44.203 "name": "BaseBdev2", 00:28:44.203 "uuid": "aa1e739f-ab7b-595f-bf77-73acf0d0b2e7", 00:28:44.203 "is_configured": true, 00:28:44.203 "data_offset": 256, 00:28:44.203 "data_size": 7936 00:28:44.203 } 00:28:44.203 ] 00:28:44.203 }' 00:28:44.203 09:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:44.203 09:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:44.203 09:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:44.203 09:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:44.203 09:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:28:44.462 09:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:44.721 [2024-07-15 09:55:12.601226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:44.721 [2024-07-15 09:55:12.601292] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:44.721 [2024-07-15 09:55:12.601325] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x97fa4434780 00:28:44.721 [2024-07-15 09:55:12.601333] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:44.721 [2024-07-15 09:55:12.601398] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:44.721 [2024-07-15 09:55:12.601406] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:44.721 [2024-07-15 09:55:12.601423] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:28:44.721 [2024-07-15 09:55:12.601428] 
bdev_raid.c:3564:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:44.721 [2024-07-15 09:55:12.601432] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:44.721 BaseBdev1 00:28:44.721 09:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # sleep 1 00:28:45.660 09:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:45.660 09:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:45.660 09:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:45.660 09:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:45.660 09:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:45.660 09:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:45.660 09:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:45.660 09:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:45.660 09:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:45.660 09:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:45.660 09:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:45.660 09:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:45.918 09:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:45.918 "name": "raid_bdev1", 00:28:45.918 "uuid": "4a2dbdc6-4290-11ef-a0af-c98d8ee52a94", 00:28:45.918 "strip_size_kb": 0, 00:28:45.918 "state": "online", 00:28:45.918 "raid_level": "raid1", 00:28:45.918 "superblock": true, 00:28:45.918 "num_base_bdevs": 2, 00:28:45.918 "num_base_bdevs_discovered": 1, 00:28:45.918 "num_base_bdevs_operational": 1, 00:28:45.918 "base_bdevs_list": [ 00:28:45.918 { 00:28:45.918 "name": null, 00:28:45.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:45.918 "is_configured": false, 00:28:45.918 "data_offset": 256, 00:28:45.918 "data_size": 7936 00:28:45.918 }, 00:28:45.918 { 00:28:45.918 "name": "BaseBdev2", 00:28:45.918 "uuid": "aa1e739f-ab7b-595f-bf77-73acf0d0b2e7", 00:28:45.918 "is_configured": true, 00:28:45.918 "data_offset": 256, 00:28:45.918 "data_size": 7936 00:28:45.918 } 00:28:45.918 ] 00:28:45.918 }' 00:28:45.918 09:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:45.918 09:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:46.175 09:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:46.175 09:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:46.175 09:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:46.175 09:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:46.175 09:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:46.175 09:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:46.175 09:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:46.432 09:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:46.432 "name": "raid_bdev1", 00:28:46.432 "uuid": "4a2dbdc6-4290-11ef-a0af-c98d8ee52a94", 00:28:46.432 "strip_size_kb": 0, 00:28:46.432 "state": "online", 00:28:46.432 "raid_level": "raid1", 00:28:46.432 "superblock": true, 00:28:46.432 "num_base_bdevs": 2, 00:28:46.432 "num_base_bdevs_discovered": 1, 00:28:46.432 "num_base_bdevs_operational": 1, 00:28:46.432 "base_bdevs_list": [ 00:28:46.432 { 00:28:46.432 "name": null, 00:28:46.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:46.432 "is_configured": false, 00:28:46.432 "data_offset": 256, 00:28:46.432 "data_size": 7936 00:28:46.432 }, 00:28:46.432 { 00:28:46.432 "name": "BaseBdev2", 00:28:46.432 "uuid": "aa1e739f-ab7b-595f-bf77-73acf0d0b2e7", 00:28:46.432 "is_configured": true, 00:28:46.432 "data_offset": 256, 00:28:46.432 "data_size": 7936 00:28:46.432 } 00:28:46.432 ] 00:28:46.432 }' 00:28:46.432 09:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:46.432 09:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:46.432 09:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:46.432 09:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:46.432 09:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:46.432 09:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:28:46.432 09:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:46.432 09:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:46.432 09:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:46.432 09:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:46.432 09:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:46.432 09:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:46.432 09:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" 
in 00:28:46.432 09:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:46.433 09:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:28:46.433 09:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:46.691 [2024-07-15 09:55:14.545353] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:46.691 [2024-07-15 09:55:14.545416] bdev_raid.c:3564:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:46.691 [2024-07-15 09:55:14.545420] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:46.691 request: 00:28:46.691 { 00:28:46.691 "base_bdev": "BaseBdev1", 00:28:46.691 "raid_bdev": "raid_bdev1", 00:28:46.691 "method": "bdev_raid_add_base_bdev", 00:28:46.691 "req_id": 1 00:28:46.691 } 00:28:46.691 Got JSON-RPC error response 00:28:46.691 response: 00:28:46.691 { 00:28:46.691 "code": -22, 00:28:46.691 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:28:46.691 } 00:28:46.691 09:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:28:46.691 09:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:46.691 09:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:46.691 09:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:46.691 09:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # sleep 1 00:28:47.628 09:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:47.628 09:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:47.628 09:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:47.628 09:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:47.628 09:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:47.628 09:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:47.628 09:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:47.628 09:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:47.628 09:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:47.628 09:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:47.628 09:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:47.628 09:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:28:47.887 09:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:47.887 "name": "raid_bdev1", 00:28:47.887 "uuid": "4a2dbdc6-4290-11ef-a0af-c98d8ee52a94", 00:28:47.887 "strip_size_kb": 0, 00:28:47.887 "state": "online", 00:28:47.887 "raid_level": "raid1", 00:28:47.887 "superblock": true, 00:28:47.887 "num_base_bdevs": 2, 00:28:47.887 "num_base_bdevs_discovered": 1, 00:28:47.887 "num_base_bdevs_operational": 1, 00:28:47.887 "base_bdevs_list": [ 00:28:47.887 { 00:28:47.887 "name": null, 00:28:47.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:47.887 "is_configured": false, 00:28:47.887 "data_offset": 256, 00:28:47.887 "data_size": 7936 00:28:47.887 }, 00:28:47.887 { 00:28:47.887 "name": "BaseBdev2", 00:28:47.887 "uuid": "aa1e739f-ab7b-595f-bf77-73acf0d0b2e7", 00:28:47.887 "is_configured": true, 00:28:47.887 "data_offset": 256, 00:28:47.887 "data_size": 7936 00:28:47.887 } 00:28:47.887 ] 00:28:47.887 }' 00:28:47.887 09:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:47.887 09:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:48.146 09:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:48.146 09:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:48.146 09:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:48.146 09:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:48.146 09:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:48.146 09:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:48.146 09:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:48.405 09:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:48.405 "name": "raid_bdev1", 00:28:48.405 "uuid": "4a2dbdc6-4290-11ef-a0af-c98d8ee52a94", 00:28:48.405 "strip_size_kb": 0, 00:28:48.405 "state": "online", 00:28:48.405 "raid_level": "raid1", 00:28:48.405 "superblock": true, 00:28:48.405 "num_base_bdevs": 2, 00:28:48.405 "num_base_bdevs_discovered": 1, 00:28:48.405 "num_base_bdevs_operational": 1, 00:28:48.405 "base_bdevs_list": [ 00:28:48.405 { 00:28:48.405 "name": null, 00:28:48.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:48.405 "is_configured": false, 00:28:48.405 "data_offset": 256, 00:28:48.405 "data_size": 7936 00:28:48.405 }, 00:28:48.405 { 00:28:48.405 "name": "BaseBdev2", 00:28:48.405 "uuid": "aa1e739f-ab7b-595f-bf77-73acf0d0b2e7", 00:28:48.405 "is_configured": true, 00:28:48.405 "data_offset": 256, 00:28:48.405 "data_size": 7936 00:28:48.405 } 00:28:48.405 ] 00:28:48.405 }' 00:28:48.406 09:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:48.406 09:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:48.406 09:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 
00:28:48.406 09:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:48.406 09:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@782 -- # killprocess 67217 00:28:48.406 09:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 67217 ']' 00:28:48.406 09:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 67217 00:28:48.406 09:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:28:48.406 09:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:28:48.406 09:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # tail -1 00:28:48.406 09:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps -c -o command 67217 00:28:48.406 09:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:28:48.406 09:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:28:48.406 killing process with pid 67217 00:28:48.406 09:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67217' 00:28:48.406 09:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # kill 67217 00:28:48.406 Received shutdown signal, test time was about 60.000000 seconds 00:28:48.406 00:28:48.406 Latency(us) 00:28:48.406 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:48.406 =================================================================================================================== 00:28:48.406 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:48.406 [2024-07-15 09:55:16.338408] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:48.406 [2024-07-15 09:55:16.338447] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:48.406 [2024-07-15 09:55:16.338461] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:48.406 [2024-07-15 09:55:16.338465] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x97fa4435680 name raid_bdev1, state offline 00:28:48.406 09:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # wait 67217 00:28:48.406 [2024-07-15 09:55:16.366088] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:48.665 09:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # return 0 00:28:48.665 00:28:48.665 real 0m23.407s 00:28:48.665 user 0m34.853s 00:28:48.665 sys 0m2.594s 00:28:48.665 09:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:48.665 09:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:48.665 ************************************ 00:28:48.665 END TEST raid_rebuild_test_sb_md_interleaved 00:28:48.665 ************************************ 00:28:48.665 09:55:16 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:28:48.665 09:55:16 bdev_raid -- bdev/bdev_raid.sh@916 -- # trap - EXIT 00:28:48.665 09:55:16 bdev_raid -- bdev/bdev_raid.sh@917 -- # cleanup 00:28:48.665 09:55:16 bdev_raid -- bdev/bdev_raid.sh@58 -- 
# '[' -n 67217 ']' 00:28:48.665 09:55:16 bdev_raid -- bdev/bdev_raid.sh@58 -- # ps -p 67217 00:28:48.665 09:55:16 bdev_raid -- bdev/bdev_raid.sh@62 -- # rm -rf /raidtest 00:28:48.665 00:28:48.665 real 10m11.394s 00:28:48.665 user 17m7.218s 00:28:48.665 sys 2m3.158s 00:28:48.665 09:55:16 bdev_raid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:48.665 ************************************ 00:28:48.665 09:55:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:48.665 END TEST bdev_raid 00:28:48.665 ************************************ 00:28:48.665 09:55:16 -- common/autotest_common.sh@1142 -- # return 0 00:28:48.665 09:55:16 -- spdk/autotest.sh@191 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:28:48.665 09:55:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:48.665 09:55:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:48.665 09:55:16 -- common/autotest_common.sh@10 -- # set +x 00:28:48.665 ************************************ 00:28:48.665 START TEST bdevperf_config 00:28:48.665 ************************************ 00:28:48.665 09:55:16 bdevperf_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:28:48.924 * Looking for test storage... 00:28:48.924 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:28:48.924 09:55:16 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:28:48.924 09:55:16 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:28:48.924 09:55:16 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:28:48.924 09:55:16 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:28:48.924 09:55:16 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:48.924 09:55:16 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:28:48.924 09:55:16 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:28:48.924 09:55:16 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:28:48.924 09:55:16 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:28:48.924 09:55:16 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:28:48.924 09:55:16 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:28:48.924 09:55:16 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:28:48.924 00:28:48.924 09:55:16 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:28:48.924 09:55:16 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:28:48.924 09:55:16 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:28:48.924 09:55:16 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:28:48.924 09:55:16 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:28:48.924 09:55:16 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:28:48.924 09:55:16 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:28:48.925 09:55:16 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:28:48.925 00:28:48.925 09:55:16 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:28:48.925 09:55:16 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:28:48.925 09:55:16 bdevperf_config -- 
bdevperf/test_config.sh@19 -- # create_job job1 00:28:48.925 09:55:16 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:28:48.925 09:55:16 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:28:48.925 09:55:16 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:28:48.925 09:55:16 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:28:48.925 09:55:16 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:28:48.925 09:55:16 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:28:48.925 00:28:48.925 09:55:16 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:28:48.925 09:55:16 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:28:48.925 09:55:16 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:28:48.925 09:55:16 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:28:48.925 09:55:16 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:28:48.925 09:55:16 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:28:48.925 09:55:16 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:28:48.925 00:28:48.925 09:55:16 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:28:48.925 09:55:16 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:28:48.925 09:55:16 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:28:48.925 09:55:16 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:28:48.925 09:55:16 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:28:48.925 09:55:16 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:28:48.925 09:55:16 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:28:48.925 09:55:16 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:28:48.925 00:28:48.925 09:55:16 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:28:48.925 09:55:16 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:28:48.925 09:55:16 bdevperf_config -- bdevperf/test_config.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:28:52.216 09:55:19 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-15 09:55:16.975430] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:28:52.216 [2024-07-15 09:55:16.975733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:28:52.216 Using job config with 4 jobs 00:28:52.216 EAL: TSC is not safe to use in SMP mode 00:28:52.216 EAL: TSC is not invariant 00:28:52.216 [2024-07-15 09:55:17.405478] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.216 [2024-07-15 09:55:17.534284] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:28:52.216 [2024-07-15 09:55:17.536964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:52.216 cpumask for '\''job0'\'' is too big 00:28:52.216 cpumask for '\''job1'\'' is too big 00:28:52.216 cpumask for '\''job2'\'' is too big 00:28:52.216 cpumask for '\''job3'\'' is too big 00:28:52.216 Running I/O for 2 seconds... 
00:28:52.216 00:28:52.216 Latency(us) 00:28:52.216 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:52.216 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:52.216 Malloc0 : 2.00 375451.35 366.65 0.00 0.00 681.59 181.47 1891.87 00:28:52.216 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:52.216 Malloc0 : 2.00 375435.40 366.64 0.00 0.00 681.45 177.36 1852.46 00:28:52.216 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:52.216 Malloc0 : 2.00 375453.01 366.65 0.00 0.00 681.26 179.83 1799.91 00:28:52.216 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:52.216 Malloc0 : 2.00 375426.75 366.63 0.00 0.00 681.14 191.32 1537.15 00:28:52.216 =================================================================================================================== 00:28:52.216 Total : 1501766.52 1466.57 0.00 0.00 681.36 177.36 1891.87' 00:28:52.216 09:55:19 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-15 09:55:16.975430] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:28:52.216 [2024-07-15 09:55:16.975733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:28:52.216 Using job config with 4 jobs 00:28:52.216 EAL: TSC is not safe to use in SMP mode 00:28:52.216 EAL: TSC is not invariant 00:28:52.216 [2024-07-15 09:55:17.405478] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.216 [2024-07-15 09:55:17.534284] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:28:52.216 [2024-07-15 09:55:17.536964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:52.216 cpumask for '\''job0'\'' is too big 00:28:52.216 cpumask for '\''job1'\'' is too big 00:28:52.216 cpumask for '\''job2'\'' is too big 00:28:52.216 cpumask for '\''job3'\'' is too big 00:28:52.216 Running I/O for 2 seconds... 00:28:52.216 00:28:52.216 Latency(us) 00:28:52.216 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:52.216 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:52.216 Malloc0 : 2.00 375451.35 366.65 0.00 0.00 681.59 181.47 1891.87 00:28:52.216 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:52.216 Malloc0 : 2.00 375435.40 366.64 0.00 0.00 681.45 177.36 1852.46 00:28:52.216 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:52.216 Malloc0 : 2.00 375453.01 366.65 0.00 0.00 681.26 179.83 1799.91 00:28:52.216 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:52.216 Malloc0 : 2.00 375426.75 366.63 0.00 0.00 681.14 191.32 1537.15 00:28:52.216 =================================================================================================================== 00:28:52.216 Total : 1501766.52 1466.57 0.00 0.00 681.36 177.36 1891.87' 00:28:52.216 09:55:19 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-15 09:55:16.975430] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
00:28:52.216 [2024-07-15 09:55:16.975733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:28:52.216 Using job config with 4 jobs 00:28:52.216 EAL: TSC is not safe to use in SMP mode 00:28:52.216 EAL: TSC is not invariant 00:28:52.216 [2024-07-15 09:55:17.405478] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.216 [2024-07-15 09:55:17.534284] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:28:52.216 [2024-07-15 09:55:17.536964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:52.216 cpumask for '\''job0'\'' is too big 00:28:52.216 cpumask for '\''job1'\'' is too big 00:28:52.216 cpumask for '\''job2'\'' is too big 00:28:52.216 cpumask for '\''job3'\'' is too big 00:28:52.216 Running I/O for 2 seconds... 00:28:52.216 00:28:52.216 Latency(us) 00:28:52.216 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:52.216 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:52.216 Malloc0 : 2.00 375451.35 366.65 0.00 0.00 681.59 181.47 1891.87 00:28:52.216 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:52.216 Malloc0 : 2.00 375435.40 366.64 0.00 0.00 681.45 177.36 1852.46 00:28:52.216 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:52.216 Malloc0 : 2.00 375453.01 366.65 0.00 0.00 681.26 179.83 1799.91 00:28:52.216 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:52.217 Malloc0 : 2.00 375426.75 366.63 0.00 0.00 681.14 191.32 1537.15 00:28:52.217 =================================================================================================================== 00:28:52.217 Total : 1501766.52 1466.57 0.00 0.00 681.36 177.36 1891.87' 00:28:52.217 09:55:19 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:28:52.217 09:55:19 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:28:52.217 09:55:19 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:28:52.217 09:55:19 bdevperf_config -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:28:52.217 [2024-07-15 09:55:19.918714] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:28:52.217 [2024-07-15 09:55:19.918990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:28:52.820 EAL: TSC is not safe to use in SMP mode 00:28:52.820 EAL: TSC is not invariant 00:28:52.820 [2024-07-15 09:55:20.669809] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.820 [2024-07-15 09:55:20.787716] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:28:52.820 [2024-07-15 09:55:20.790357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:52.820 cpumask for 'job0' is too big 00:28:52.820 cpumask for 'job1' is too big 00:28:52.820 cpumask for 'job2' is too big 00:28:52.820 cpumask for 'job3' is too big 00:28:55.350 09:55:23 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:28:55.350 Running I/O for 2 seconds... 
00:28:55.350 00:28:55.350 Latency(us) 00:28:55.350 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:55.350 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:55.350 Malloc0 : 2.00 343153.25 335.11 0.00 0.00 745.74 183.93 1635.68 00:28:55.350 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:55.350 Malloc0 : 2.00 343138.34 335.10 0.00 0.00 745.58 172.44 1569.99 00:28:55.350 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:55.350 Malloc0 : 2.00 343114.85 335.07 0.00 0.00 745.46 170.79 1569.99 00:28:55.350 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:55.350 Malloc0 : 2.00 343186.51 335.14 0.00 0.00 745.14 73.08 1563.42 00:28:55.350 =================================================================================================================== 00:28:55.350 Total : 1372592.95 1340.42 0.00 0.00 745.48 73.08 1635.68' 00:28:55.350 09:55:23 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup 00:28:55.350 09:55:23 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:28:55.350 09:55:23 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:28:55.350 09:55:23 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:28:55.350 09:55:23 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:28:55.350 09:55:23 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:28:55.350 09:55:23 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:28:55.350 09:55:23 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:28:55.350 00:28:55.350 09:55:23 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:28:55.350 09:55:23 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:28:55.350 09:55:23 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:28:55.350 09:55:23 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:28:55.350 09:55:23 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:28:55.350 09:55:23 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:28:55.350 09:55:23 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:28:55.350 09:55:23 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:28:55.350 00:28:55.350 09:55:23 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:28:55.350 09:55:23 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:28:55.350 09:55:23 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:28:55.350 09:55:23 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:28:55.350 09:55:23 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:28:55.350 09:55:23 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:28:55.350 09:55:23 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:28:55.350 09:55:23 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:28:55.350 00:28:55.350 09:55:23 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:28:55.350 09:55:23 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:28:55.350 09:55:23 bdevperf_config -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 
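Before this run, test.conf was assembled by three create_job calls (job0 through job2, rw=write, filename=Malloc0). common.sh is only partially visible in the trace, so the exact key layout is an assumption, but the generated file is plausibly nothing more than an INI fragment written along these lines:

# Hypothetical reconstruction of the test.conf consumed by -j above;
# the real create_job in common.sh may emit additional keys.
cat > /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf <<'EOF'
[job0]
filename=Malloc0
rw=write
[job1]
filename=Malloc0
rw=write
[job2]
filename=Malloc0
rw=write
EOF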
00:28:58.629 09:55:26 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-15 09:55:23.143979] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:28:58.629 [2024-07-15 09:55:23.144202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:28:58.629 Using job config with 3 jobs 00:28:58.629 EAL: TSC is not safe to use in SMP mode 00:28:58.629 EAL: TSC is not invariant 00:28:58.629 [2024-07-15 09:55:23.862451] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.629 [2024-07-15 09:55:23.972809] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:28:58.629 [2024-07-15 09:55:23.975443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.629 cpumask for '\''job0'\'' is too big 00:28:58.629 cpumask for '\''job1'\'' is too big 00:28:58.629 cpumask for '\''job2'\'' is too big 00:28:58.629 Running I/O for 2 seconds... 00:28:58.629 00:28:58.629 Latency(us) 00:28:58.629 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:58.629 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:28:58.629 Malloc0 : 2.00 411754.74 402.10 0.00 0.00 621.47 275.90 1051.04 00:28:58.629 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:28:58.629 Malloc0 : 2.00 411770.53 402.12 0.00 0.00 621.29 163.40 893.38 00:28:58.629 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:28:58.629 Malloc0 : 2.00 411754.89 402.10 0.00 0.00 621.19 140.41 899.95 00:28:58.629 =================================================================================================================== 00:28:58.629 Total : 1235280.15 1206.33 0.00 0.00 621.32 140.41 1051.04' 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-15 09:55:23.143979] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:28:58.629 [2024-07-15 09:55:23.144202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:28:58.629 Using job config with 3 jobs 00:28:58.629 EAL: TSC is not safe to use in SMP mode 00:28:58.629 EAL: TSC is not invariant 00:28:58.629 [2024-07-15 09:55:23.862451] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.629 [2024-07-15 09:55:23.972809] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:28:58.629 [2024-07-15 09:55:23.975443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.629 cpumask for '\''job0'\'' is too big 00:28:58.629 cpumask for '\''job1'\'' is too big 00:28:58.629 cpumask for '\''job2'\'' is too big 00:28:58.629 Running I/O for 2 seconds... 
00:28:58.629 00:28:58.629 Latency(us) 00:28:58.629 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:58.629 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:28:58.629 Malloc0 : 2.00 411754.74 402.10 0.00 0.00 621.47 275.90 1051.04 00:28:58.629 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:28:58.629 Malloc0 : 2.00 411770.53 402.12 0.00 0.00 621.29 163.40 893.38 00:28:58.629 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:28:58.629 Malloc0 : 2.00 411754.89 402.10 0.00 0.00 621.19 140.41 899.95 00:28:58.629 =================================================================================================================== 00:28:58.629 Total : 1235280.15 1206.33 0.00 0.00 621.32 140.41 1051.04' 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-15 09:55:23.143979] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:28:58.629 [2024-07-15 09:55:23.144202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:28:58.629 Using job config with 3 jobs 00:28:58.629 EAL: TSC is not safe to use in SMP mode 00:28:58.629 EAL: TSC is not invariant 00:28:58.629 [2024-07-15 09:55:23.862451] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.629 [2024-07-15 09:55:23.972809] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:28:58.629 [2024-07-15 09:55:23.975443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.629 cpumask for '\''job0'\'' is too big 00:28:58.629 cpumask for '\''job1'\'' is too big 00:28:58.629 cpumask for '\''job2'\'' is too big 00:28:58.629 Running I/O for 2 seconds... 
00:28:58.629 00:28:58.629 Latency(us) 00:28:58.629 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:58.629 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:28:58.629 Malloc0 : 2.00 411754.74 402.10 0.00 0.00 621.47 275.90 1051.04 00:28:58.629 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:28:58.629 Malloc0 : 2.00 411770.53 402.12 0.00 0.00 621.29 163.40 893.38 00:28:58.629 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:28:58.629 Malloc0 : 2.00 411754.89 402.10 0.00 0.00 621.19 140.41 899.95 00:28:58.629 =================================================================================================================== 00:28:58.629 Total : 1235280.15 1206.33 0.00 0.00 621.32 140.41 1051.04' 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:28:58.629 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:28:58.629 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:28:58.629 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2 00:28:58.629 
09:55:26 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:28:58.629 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:28:58.629 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:28:58.629 09:55:26 bdevperf_config -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:29:01.162 09:55:29 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-15 09:55:26.357526] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:29:01.162 [2024-07-15 09:55:26.357831] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:29:01.162 Using job config with 4 jobs 00:29:01.162 EAL: TSC is not safe to use in SMP mode 00:29:01.162 EAL: TSC is not invariant 00:29:01.162 [2024-07-15 09:55:26.801046] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.162 [2024-07-15 09:55:26.910855] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:29:01.162 [2024-07-15 09:55:26.913543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.162 cpumask for '\''job0'\'' is too big 00:29:01.162 cpumask for '\''job1'\'' is too big 00:29:01.162 cpumask for '\''job2'\'' is too big 00:29:01.162 cpumask for '\''job3'\'' is too big 00:29:01.162 Running I/O for 2 seconds... 
00:29:01.162 00:29:01.162 Latency(us) 00:29:01.162 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:01.162 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:01.162 Malloc0 : 2.00 152586.95 149.01 0.00 0.00 1677.37 617.49 3731.19 00:29:01.162 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:01.162 Malloc1 : 2.00 152577.12 149.00 0.00 0.00 1677.22 509.10 3757.47 00:29:01.162 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:01.162 Malloc0 : 2.00 152566.31 148.99 0.00 0.00 1676.59 505.81 3218.81 00:29:01.162 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:01.162 Malloc1 : 2.00 152557.02 148.98 0.00 0.00 1676.50 413.85 3271.36 00:29:01.162 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:01.162 Malloc0 : 2.00 152605.12 149.03 0.00 0.00 1675.38 568.22 2680.15 00:29:01.162 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:01.162 Malloc1 : 2.00 152595.15 149.02 0.00 0.00 1675.24 469.68 2667.01 00:29:01.162 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:01.162 Malloc0 : 2.00 152587.15 149.01 0.00 0.00 1674.71 469.68 2417.39 00:29:01.162 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:01.163 Malloc1 : 2.01 152574.14 149.00 0.00 0.00 1674.59 476.25 2417.39 00:29:01.163 =================================================================================================================== 00:29:01.163 Total : 1220648.94 1192.04 0.00 0.00 1675.95 413.85 3757.47' 00:29:01.163 09:55:29 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-15 09:55:26.357526] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:29:01.163 [2024-07-15 09:55:26.357831] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:29:01.163 Using job config with 4 jobs 00:29:01.163 EAL: TSC is not safe to use in SMP mode 00:29:01.163 EAL: TSC is not invariant 00:29:01.163 [2024-07-15 09:55:26.801046] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.163 [2024-07-15 09:55:26.910855] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:29:01.163 [2024-07-15 09:55:26.913543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.163 cpumask for '\''job0'\'' is too big 00:29:01.163 cpumask for '\''job1'\'' is too big 00:29:01.163 cpumask for '\''job2'\'' is too big 00:29:01.163 cpumask for '\''job3'\'' is too big 00:29:01.163 Running I/O for 2 seconds... 
00:29:01.163 00:29:01.163 Latency(us) 00:29:01.163 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:01.163 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:01.163 Malloc0 : 2.00 152586.95 149.01 0.00 0.00 1677.37 617.49 3731.19 00:29:01.163 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:01.163 Malloc1 : 2.00 152577.12 149.00 0.00 0.00 1677.22 509.10 3757.47 00:29:01.163 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:01.163 Malloc0 : 2.00 152566.31 148.99 0.00 0.00 1676.59 505.81 3218.81 00:29:01.163 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:01.163 Malloc1 : 2.00 152557.02 148.98 0.00 0.00 1676.50 413.85 3271.36 00:29:01.163 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:01.163 Malloc0 : 2.00 152605.12 149.03 0.00 0.00 1675.38 568.22 2680.15 00:29:01.163 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:01.163 Malloc1 : 2.00 152595.15 149.02 0.00 0.00 1675.24 469.68 2667.01 00:29:01.163 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:01.163 Malloc0 : 2.00 152587.15 149.01 0.00 0.00 1674.71 469.68 2417.39 00:29:01.163 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:01.163 Malloc1 : 2.01 152574.14 149.00 0.00 0.00 1674.59 476.25 2417.39 00:29:01.163 =================================================================================================================== 00:29:01.163 Total : 1220648.94 1192.04 0.00 0.00 1675.95 413.85 3757.47' 00:29:01.163 09:55:29 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-15 09:55:26.357526] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:29:01.163 [2024-07-15 09:55:26.357831] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:29:01.163 Using job config with 4 jobs 00:29:01.163 EAL: TSC is not safe to use in SMP mode 00:29:01.163 EAL: TSC is not invariant 00:29:01.163 [2024-07-15 09:55:26.801046] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.163 [2024-07-15 09:55:26.910855] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:29:01.163 [2024-07-15 09:55:26.913543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.163 cpumask for '\''job0'\'' is too big 00:29:01.163 cpumask for '\''job1'\'' is too big 00:29:01.163 cpumask for '\''job2'\'' is too big 00:29:01.163 cpumask for '\''job3'\'' is too big 00:29:01.163 Running I/O for 2 seconds... 
00:29:01.163 00:29:01.163 Latency(us) 00:29:01.163 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:01.163 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:01.163 Malloc0 : 2.00 152586.95 149.01 0.00 0.00 1677.37 617.49 3731.19 00:29:01.163 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:01.163 Malloc1 : 2.00 152577.12 149.00 0.00 0.00 1677.22 509.10 3757.47 00:29:01.163 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:01.163 Malloc0 : 2.00 152566.31 148.99 0.00 0.00 1676.59 505.81 3218.81 00:29:01.163 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:01.163 Malloc1 : 2.00 152557.02 148.98 0.00 0.00 1676.50 413.85 3271.36 00:29:01.163 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:01.163 Malloc0 : 2.00 152605.12 149.03 0.00 0.00 1675.38 568.22 2680.15 00:29:01.163 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:01.163 Malloc1 : 2.00 152595.15 149.02 0.00 0.00 1675.24 469.68 2667.01 00:29:01.163 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:01.163 Malloc0 : 2.00 152587.15 149.01 0.00 0.00 1674.71 469.68 2417.39 00:29:01.163 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:01.163 Malloc1 : 2.01 152574.14 149.00 0.00 0.00 1674.59 476.25 2417.39 00:29:01.163 =================================================================================================================== 00:29:01.163 Total : 1220648.94 1192.04 0.00 0.00 1675.95 413.85 3757.47' 00:29:01.163 09:55:29 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:29:01.163 09:55:29 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:29:01.163 09:55:29 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:29:01.163 09:55:29 bdevperf_config -- bdevperf/test_config.sh@44 -- # cleanup 00:29:01.163 09:55:29 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:29:01.163 09:55:29 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:29:01.163 00:29:01.163 real 0m12.517s 00:29:01.163 user 0m9.827s 00:29:01.163 sys 0m2.771s 00:29:01.163 09:55:29 bdevperf_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:01.163 09:55:29 bdevperf_config -- common/autotest_common.sh@10 -- # set +x 00:29:01.163 ************************************ 00:29:01.163 END TEST bdevperf_config 00:29:01.163 ************************************ 00:29:01.423 09:55:29 -- common/autotest_common.sh@1142 -- # return 0 00:29:01.423 09:55:29 -- spdk/autotest.sh@192 -- # uname -s 00:29:01.423 09:55:29 -- spdk/autotest.sh@192 -- # [[ FreeBSD == Linux ]] 00:29:01.423 09:55:29 -- spdk/autotest.sh@198 -- # uname -s 00:29:01.423 09:55:29 -- spdk/autotest.sh@198 -- # [[ FreeBSD == Linux ]] 00:29:01.423 09:55:29 -- spdk/autotest.sh@211 -- # '[' 1 -eq 1 ']' 00:29:01.423 09:55:29 -- spdk/autotest.sh@212 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:29:01.423 09:55:29 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:01.423 09:55:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:01.423 09:55:29 -- common/autotest_common.sh@10 -- # set +x 00:29:01.423 
************************************ 00:29:01.423 START TEST blockdev_nvme 00:29:01.423 ************************************ 00:29:01.423 09:55:29 blockdev_nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:29:01.423 * Looking for test storage... 00:29:01.423 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:29:01.423 09:55:29 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:29:01.423 09:55:29 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:29:01.423 09:55:29 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:29:01.423 09:55:29 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:01.423 09:55:29 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:29:01.423 09:55:29 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:29:01.423 09:55:29 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:29:01.423 09:55:29 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:29:01.423 09:55:29 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:29:01.423 09:55:29 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:29:01.423 09:55:29 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:29:01.423 09:55:29 blockdev_nvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:29:01.423 09:55:29 blockdev_nvme -- bdev/blockdev.sh@674 -- # uname -s 00:29:01.423 09:55:29 blockdev_nvme -- bdev/blockdev.sh@674 -- # '[' FreeBSD = Linux ']' 00:29:01.423 09:55:29 blockdev_nvme -- bdev/blockdev.sh@679 -- # PRE_RESERVED_MEM=2048 00:29:01.423 09:55:29 blockdev_nvme -- bdev/blockdev.sh@682 -- # test_type=nvme 00:29:01.423 09:55:29 blockdev_nvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:29:01.423 09:55:29 blockdev_nvme -- bdev/blockdev.sh@684 -- # dek= 00:29:01.423 09:55:29 blockdev_nvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:29:01.423 09:55:29 blockdev_nvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:29:01.423 09:55:29 blockdev_nvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:29:01.423 09:55:29 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:29:01.423 09:55:29 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:29:01.423 09:55:29 blockdev_nvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:29:01.423 09:55:29 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=67949 00:29:01.423 09:55:29 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:01.423 09:55:29 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 67949 00:29:01.423 09:55:29 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:29:01.423 09:55:29 blockdev_nvme -- common/autotest_common.sh@829 -- # '[' -z 67949 ']' 00:29:01.423 09:55:29 blockdev_nvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:01.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:01.423 09:55:29 blockdev_nvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:01.423 09:55:29 blockdev_nvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
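waitforlisten, invoked just above with max_retries=100, blocks until the freshly started spdk_tgt answers on /var/tmp/spdk.sock. The full helper body is not shown in the trace, so this is a hedged sketch of the polling loop it implies, not the verbatim autotest_common.sh code:

waitforlisten() {
  local pid=$1 addr=${2:-/var/tmp/spdk.sock} i
  echo "Waiting for process to start up and listen on UNIX domain socket $addr..."
  for ((i = 100; i != 0; i--)); do
    kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
    # rpc_get_methods succeeds once the RPC server accepts connections.
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$addr" rpc_get_methods &>/dev/null; then
      return 0
    fi
    sleep 0.1
  done
  return 1                                   # retries exhausted
}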
00:29:01.423 09:55:29 blockdev_nvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:01.423 09:55:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:01.700 [2024-07-15 09:55:29.526724] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:29:01.700 [2024-07-15 09:55:29.527072] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:29:02.296 EAL: TSC is not safe to use in SMP mode 00:29:02.296 EAL: TSC is not invariant 00:29:02.296 [2024-07-15 09:55:30.301795] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:02.575 [2024-07-15 09:55:30.415250] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:29:02.575 [2024-07-15 09:55:30.417925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.575 09:55:30 blockdev_nvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:02.575 09:55:30 blockdev_nvme -- common/autotest_common.sh@862 -- # return 0 00:29:02.575 09:55:30 blockdev_nvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:29:02.575 09:55:30 blockdev_nvme -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:29:02.575 09:55:30 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:29:02.575 09:55:30 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:29:02.575 09:55:30 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:02.834 09:55:30 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:29:02.834 09:55:30 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.834 09:55:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:02.834 [2024-07-15 09:55:30.704473] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:02.834 09:55:30 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.834 09:55:30 blockdev_nvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:29:02.834 09:55:30 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.834 09:55:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:02.834 09:55:30 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.834 09:55:30 blockdev_nvme -- bdev/blockdev.sh@740 -- # cat 00:29:02.834 09:55:30 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:29:02.834 09:55:30 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.834 09:55:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:02.834 09:55:30 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.834 09:55:30 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:29:02.834 09:55:30 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.834 09:55:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:02.834 09:55:30 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.834 09:55:30 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:29:02.834 09:55:30 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.834 09:55:30 blockdev_nvme -- 
common/autotest_common.sh@10 -- # set +x 00:29:02.834 09:55:30 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.834 09:55:30 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:29:02.834 09:55:30 blockdev_nvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:29:02.834 09:55:30 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:29:02.834 09:55:30 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.834 09:55:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:02.834 09:55:30 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.834 09:55:30 blockdev_nvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:29:02.834 09:55:30 blockdev_nvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:29:02.834 09:55:30 blockdev_nvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "5ef521c6-4290-11ef-a0af-c98d8ee52a94"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "5ef521c6-4290-11ef-a0af-c98d8ee52a94",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:29:02.834 09:55:30 blockdev_nvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:29:02.834 09:55:30 blockdev_nvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:29:02.834 09:55:30 blockdev_nvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:29:02.834 09:55:30 blockdev_nvme -- bdev/blockdev.sh@754 -- # killprocess 67949 00:29:02.834 09:55:30 blockdev_nvme -- common/autotest_common.sh@948 -- # '[' -z 67949 ']' 00:29:02.834 09:55:30 blockdev_nvme -- common/autotest_common.sh@952 -- # kill -0 67949 00:29:02.834 09:55:30 blockdev_nvme -- common/autotest_common.sh@953 -- # uname 00:29:02.834 09:55:30 blockdev_nvme -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:29:02.834 09:55:30 blockdev_nvme -- common/autotest_common.sh@956 -- # ps -c -o command 67949 00:29:02.834 09:55:30 blockdev_nvme -- common/autotest_common.sh@956 -- # tail -1 00:29:02.834 09:55:30 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:29:02.834 09:55:30 blockdev_nvme -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:29:02.834 killing process with pid 67949 
00:29:02.834 09:55:30 blockdev_nvme -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67949' 00:29:02.834 09:55:30 blockdev_nvme -- common/autotest_common.sh@967 -- # kill 67949 00:29:02.834 09:55:30 blockdev_nvme -- common/autotest_common.sh@972 -- # wait 67949 00:29:03.400 09:55:31 blockdev_nvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:03.400 09:55:31 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:29:03.400 09:55:31 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:29:03.400 09:55:31 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:03.400 09:55:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:03.400 ************************************ 00:29:03.400 START TEST bdev_hello_world 00:29:03.400 ************************************ 00:29:03.400 09:55:31 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:29:03.400 [2024-07-15 09:55:31.308865] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:29:03.400 [2024-07-15 09:55:31.309157] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:29:03.967 EAL: TSC is not safe to use in SMP mode 00:29:03.967 EAL: TSC is not invariant 00:29:03.967 [2024-07-15 09:55:32.059651] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.226 [2024-07-15 09:55:32.173535] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:29:04.226 [2024-07-15 09:55:32.176204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:04.226 [2024-07-15 09:55:32.237978] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:04.226 [2024-07-15 09:55:32.310062] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:29:04.226 [2024-07-15 09:55:32.310131] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:29:04.226 [2024-07-15 09:55:32.310143] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:29:04.226 [2024-07-15 09:55:32.310964] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:29:04.226 [2024-07-15 09:55:32.311297] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:29:04.226 [2024-07-15 09:55:32.311315] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:29:04.226 [2024-07-15 09:55:32.311548] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
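killprocess, traced here for spdk_tgt (pid 67949) and earlier for bdevperf (pid 67217), branches on uname because FreeBSD's ps lacks the GNU-style options. A sketch of the FreeBSD path reconstructed from the xtrace (the helper lives in autotest_common.sh; anything beyond what the trace shows is an assumption):

killprocess() {
  local pid=$1 process_name
  [[ -n $pid ]] || return 1
  kill -0 "$pid" || return 1                  # @952: pid must still be alive
  if [[ $(uname) != Linux ]]; then
    # FreeBSD: ps -c prints the bare command name; tail -1 skips the header.
    process_name=$(ps -c -o command "$pid" | tail -1)
    [[ $process_name != sudo ]] || return 1   # never signal a bare sudo wrapper
  fi
  echo "killing process with pid $pid"
  kill "$pid" && wait "$pid"                  # @967 kill, @972 wait, as traced
}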
00:29:04.226 00:29:04.226 [2024-07-15 09:55:32.311563] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:29:04.485 00:29:04.485 real 0m1.275s 00:29:04.485 user 0m0.467s 00:29:04.485 sys 0m0.809s 00:29:04.485 09:55:32 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:04.485 09:55:32 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:29:04.485 ************************************ 00:29:04.485 END TEST bdev_hello_world 00:29:04.485 ************************************ 00:29:04.744 09:55:32 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:29:04.744 09:55:32 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:29:04.744 09:55:32 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:04.744 09:55:32 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:04.744 09:55:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:04.744 ************************************ 00:29:04.744 START TEST bdev_bounds 00:29:04.744 ************************************ 00:29:04.744 09:55:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:29:04.744 09:55:32 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=68020 00:29:04.744 09:55:32 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:29:04.744 09:55:32 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 68020' 00:29:04.744 Process bdevio pid: 68020 00:29:04.745 09:55:32 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 2048 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:04.745 09:55:32 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 68020 00:29:04.745 09:55:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 68020 ']' 00:29:04.745 09:55:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:04.745 09:55:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:04.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:04.745 09:55:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:04.745 09:55:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:04.745 09:55:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:29:04.745 [2024-07-15 09:55:32.645231] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:29:04.745 [2024-07-15 09:55:32.645524] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 2048 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:29:05.311 EAL: TSC is not safe to use in SMP mode 00:29:05.311 EAL: TSC is not invariant 00:29:05.311 [2024-07-15 09:55:33.357878] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:05.569 [2024-07-15 09:55:33.469424] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:29:05.569 [2024-07-15 09:55:33.469499] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:29:05.569 [2024-07-15 09:55:33.469507] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:29:05.569 [2024-07-15 09:55:33.473440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:05.569 [2024-07-15 09:55:33.473289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:05.569 [2024-07-15 09:55:33.473441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:05.569 [2024-07-15 09:55:33.535135] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:05.569 09:55:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:05.569 09:55:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:29:05.569 09:55:33 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:29:05.827 I/O targets: 00:29:05.827 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:29:05.827 00:29:05.827 00:29:05.827 CUnit - A unit testing framework for C - Version 2.1-3 00:29:05.827 http://cunit.sourceforge.net/ 00:29:05.827 00:29:05.827 00:29:05.827 Suite: bdevio tests on: Nvme0n1 00:29:05.827 Test: blockdev write read block ...passed 00:29:05.827 Test: blockdev write zeroes read block ...passed 00:29:05.827 Test: blockdev write zeroes read no split ...passed 00:29:05.827 Test: blockdev write zeroes read split ...passed 00:29:05.827 Test: blockdev write zeroes read split partial ...passed 00:29:05.827 Test: blockdev reset ...[2024-07-15 09:55:33.752278] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:29:05.827 [2024-07-15 09:55:33.753813] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:29:05.827 passed 00:29:05.827 Test: blockdev write read 8 blocks ...passed 00:29:05.827 Test: blockdev write read size > 128k ...passed 00:29:05.827 Test: blockdev write read invalid size ...passed 00:29:05.827 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:05.827 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:05.827 Test: blockdev write read max offset ...passed 00:29:05.827 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:05.827 Test: blockdev writev readv 8 blocks ...passed 00:29:05.827 Test: blockdev writev readv 30 x 1block ...passed 00:29:05.827 Test: blockdev writev readv block ...passed 00:29:05.828 Test: blockdev writev readv size > 128k ...passed 00:29:05.828 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:05.828 Test: blockdev comparev and writev ...[2024-07-15 09:55:33.758965] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2f2718000 len:0x1000 00:29:05.828 [2024-07-15 09:55:33.759009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:05.828 passed 00:29:05.828 Test: blockdev nvme passthru rw ...passed 00:29:05.828 Test: blockdev nvme passthru vendor specific ...passed 00:29:05.828 Test: blockdev nvme admin passthru ...[2024-07-15 09:55:33.759617] nvme_qpair.c: 220:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:05.828 [2024-07-15 09:55:33.759635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:05.828 passed 00:29:05.828 Test: blockdev copy ...passed 00:29:05.828 00:29:05.828 Run Summary: Type Total Ran Passed Failed Inactive 00:29:05.828 suites 1 1 n/a 0 0 00:29:05.828 tests 23 23 23 0 0 00:29:05.828 asserts 152 152 152 0 n/a 00:29:05.828 00:29:05.828 Elapsed time = 0.031 seconds 00:29:05.828 0 00:29:05.828 09:55:33 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 68020 00:29:05.828 09:55:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 68020 ']' 00:29:05.828 09:55:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 68020 00:29:05.828 09:55:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:29:05.828 09:55:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:29:05.828 09:55:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # tail -1 00:29:05.828 09:55:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps -c -o command 68020 00:29:05.828 09:55:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=bdevio 00:29:05.828 09:55:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' bdevio = sudo ']' 00:29:05.828 killing process with pid 68020 00:29:05.828 09:55:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68020' 00:29:05.828 09:55:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 68020 00:29:05.828 09:55:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 68020 00:29:06.086 09:55:34 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:29:06.086 00:29:06.086 real 0m1.419s 00:29:06.086 user 0m1.896s 00:29:06.086 sys 0m0.854s 00:29:06.086 
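The bounds suite above is driven in two steps: bdevio starts in wait mode, and the 23 CUnit tests are then triggered over the RPC socket, which is why "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." appears before any test output. A sketch of the same flow outside the harness, assuming -w makes bdevio block until a perform_tests RPC arrives (consistent with the sequence recorded above):

# Start bdevio with 2048 MB of memory (-s 2048) in wait mode (-w);
# the harness backgrounds it and polls the socket (waitforlisten)
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 2048 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
# Once /var/tmp/spdk.sock is listening, trigger the suite
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests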
************************************ 00:29:06.086 END TEST bdev_bounds 00:29:06.086 ************************************ 00:29:06.087 09:55:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:06.087 09:55:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:29:06.087 09:55:34 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:29:06.087 09:55:34 blockdev_nvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:29:06.087 09:55:34 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:29:06.087 09:55:34 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:06.087 09:55:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:06.087 ************************************ 00:29:06.087 START TEST bdev_nbd 00:29:06.087 ************************************ 00:29:06.087 09:55:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:29:06.087 09:55:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:29:06.087 09:55:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ FreeBSD == Linux ]] 00:29:06.087 09:55:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # return 0 00:29:06.087 00:29:06.087 real 0m0.007s 00:29:06.087 user 0m0.004s 00:29:06.087 sys 0m0.003s 00:29:06.087 09:55:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:06.087 ************************************ 00:29:06.087 END TEST bdev_nbd 00:29:06.087 ************************************ 00:29:06.087 09:55:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:29:06.087 09:55:34 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:29:06.087 09:55:34 blockdev_nvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:29:06.087 09:55:34 blockdev_nvme -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 00:29:06.087 skipping fio tests on NVMe due to multi-ns failures. 00:29:06.087 09:55:34 blockdev_nvme -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:29:06.087 09:55:34 blockdev_nvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:06.087 09:55:34 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:29:06.087 09:55:34 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:29:06.087 09:55:34 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:06.087 09:55:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:06.087 ************************************ 00:29:06.087 START TEST bdev_verify 00:29:06.087 ************************************ 00:29:06.087 09:55:34 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:29:06.345 [2024-07-15 09:55:34.190825] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
00:29:06.345 [2024-07-15 09:55:34.191110] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:29:06.911 EAL: TSC is not safe to use in SMP mode 00:29:06.912 EAL: TSC is not invariant 00:29:06.912 [2024-07-15 09:55:34.919151] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:07.169 [2024-07-15 09:55:35.034828] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:29:07.169 [2024-07-15 09:55:35.034902] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:29:07.169 [2024-07-15 09:55:35.038209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:07.169 [2024-07-15 09:55:35.038203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:07.169 [2024-07-15 09:55:35.100232] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:07.169 Running I/O for 5 seconds... 00:29:12.436 00:29:12.436 Latency(us) 00:29:12.436 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:12.436 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:12.436 Verification LBA range: start 0x0 length 0xa0000 00:29:12.436 Nvme0n1 : 5.00 23344.96 91.19 0.00 0.00 5471.27 640.48 14294.14 00:29:12.436 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:29:12.436 Verification LBA range: start 0xa0000 length 0xa0000 00:29:12.436 Nvme0n1 : 5.00 22036.18 86.08 0.00 0.00 5796.28 627.34 13663.51 00:29:12.436 =================================================================================================================== 00:29:12.436 Total : 45381.14 177.27 0.00 0.00 5629.09 627.34 14294.14 00:29:13.811 00:29:13.811 real 0m7.414s 00:29:13.811 user 0m13.000s 00:29:13.811 sys 0m0.793s 00:29:13.811 ************************************ 00:29:13.811 END TEST bdev_verify 00:29:13.811 ************************************ 00:29:13.811 09:55:41 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:13.811 09:55:41 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:29:13.811 09:55:41 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:29:13.811 09:55:41 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:29:13.811 09:55:41 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:29:13.811 09:55:41 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:13.811 09:55:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:13.811 ************************************ 00:29:13.811 START TEST bdev_verify_big_io 00:29:13.811 ************************************ 00:29:13.811 09:55:41 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:29:13.811 [2024-07-15 09:55:41.667460] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
00:29:13.811 [2024-07-15 09:55:41.667783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:29:14.377 EAL: TSC is not safe to use in SMP mode 00:29:14.377 EAL: TSC is not invariant 00:29:14.377 [2024-07-15 09:55:42.392131] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:14.633 [2024-07-15 09:55:42.506692] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:29:14.633 [2024-07-15 09:55:42.506771] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:29:14.633 [2024-07-15 09:55:42.509951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:14.633 [2024-07-15 09:55:42.509943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:14.633 [2024-07-15 09:55:42.571639] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:14.633 Running I/O for 5 seconds... 00:29:19.890 00:29:19.890 Latency(us) 00:29:19.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:19.890 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:29:19.890 Verification LBA range: start 0x0 length 0xa000 00:29:19.890 Nvme0n1 : 5.01 8210.86 513.18 0.00 0.00 15505.44 80.06 24594.33 00:29:19.890 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:29:19.890 Verification LBA range: start 0xa000 length 0xa000 00:29:19.890 Nvme0n1 : 5.01 8107.20 506.70 0.00 0.00 15702.34 63.64 35525.14 00:29:19.890 =================================================================================================================== 00:29:19.890 Total : 16318.06 1019.88 0.00 0.00 15603.29 63.64 35525.14 00:29:24.103 00:29:24.103 real 0m10.071s 00:29:24.103 user 0m18.329s 00:29:24.103 sys 0m0.777s 00:29:24.103 09:55:51 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:24.103 ************************************ 00:29:24.103 END TEST bdev_verify_big_io 00:29:24.103 ************************************ 00:29:24.103 09:55:51 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:29:24.103 09:55:51 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:29:24.103 09:55:51 blockdev_nvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:24.103 09:55:51 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:29:24.103 09:55:51 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:24.103 09:55:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:24.103 ************************************ 00:29:24.103 START TEST bdev_write_zeroes 00:29:24.103 ************************************ 00:29:24.103 09:55:51 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:24.103 [2024-07-15 09:55:51.797634] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
00:29:24.103 [2024-07-15 09:55:51.797941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:29:24.362 EAL: TSC is not safe to use in SMP mode 00:29:24.362 EAL: TSC is not invariant 00:29:24.362 [2024-07-15 09:55:52.238933] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.362 [2024-07-15 09:55:52.351797] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:29:24.362 [2024-07-15 09:55:52.354329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.362 [2024-07-15 09:55:52.416137] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:24.620 Running I/O for 1 seconds... 00:29:25.553 00:29:25.554 Latency(us) 00:29:25.554 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:25.554 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:29:25.554 Nvme0n1 : 1.00 71527.97 279.41 0.00 0.00 1787.71 489.39 17552.36 00:29:25.554 =================================================================================================================== 00:29:25.554 Total : 71527.97 279.41 0.00 0.00 1787.71 489.39 17552.36 00:29:25.813 00:29:25.813 real 0m1.972s 00:29:25.813 user 0m1.466s 00:29:25.813 sys 0m0.504s 00:29:25.813 ************************************ 00:29:25.813 END TEST bdev_write_zeroes 00:29:25.813 ************************************ 00:29:25.813 09:55:53 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:25.813 09:55:53 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:29:25.813 09:55:53 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:29:25.813 09:55:53 blockdev_nvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:25.813 09:55:53 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:29:25.813 09:55:53 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:25.813 09:55:53 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:25.813 ************************************ 00:29:25.813 START TEST bdev_json_nonenclosed 00:29:25.813 ************************************ 00:29:25.813 09:55:53 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:25.813 [2024-07-15 09:55:53.825225] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:29:25.813 [2024-07-15 09:55:53.825491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:29:26.380 EAL: TSC is not safe to use in SMP mode 00:29:26.381 EAL: TSC is not invariant 00:29:26.381 [2024-07-15 09:55:54.254809] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.381 [2024-07-15 09:55:54.368604] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:29:26.381 [2024-07-15 09:55:54.371179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.381 [2024-07-15 09:55:54.371217] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:29:26.381 [2024-07-15 09:55:54.371233] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:29:26.381 [2024-07-15 09:55:54.371240] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:26.693 00:29:26.693 real 0m0.721s 00:29:26.693 user 0m0.237s 00:29:26.693 sys 0m0.482s 00:29:26.693 09:55:54 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:29:26.693 09:55:54 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:26.693 ************************************ 00:29:26.693 END TEST bdev_json_nonenclosed 00:29:26.693 09:55:54 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:29:26.693 ************************************ 00:29:26.693 09:55:54 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:29:26.693 09:55:54 blockdev_nvme -- bdev/blockdev.sh@782 -- # true 00:29:26.693 09:55:54 blockdev_nvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:26.693 09:55:54 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:29:26.693 09:55:54 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:26.693 09:55:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:26.693 ************************************ 00:29:26.693 START TEST bdev_json_nonarray 00:29:26.693 ************************************ 00:29:26.693 09:55:54 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:26.693 [2024-07-15 09:55:54.601278] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:29:26.693 [2024-07-15 09:55:54.601588] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:29:26.951 EAL: TSC is not safe to use in SMP mode 00:29:26.951 EAL: TSC is not invariant 00:29:26.951 [2024-07-15 09:55:55.037804] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:27.210 [2024-07-15 09:55:55.153282] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:29:27.210 [2024-07-15 09:55:55.155872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:27.210 [2024-07-15 09:55:55.155918] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
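Both JSON negative tests exercise the same config loader: json_config.c rejects nonenclosed.json because the top-level value is not a {} object, and nonarray.json because its 'subsystems' key is not an array (the two loader errors logged above); the harness treats the resulting exit code 234 as the expected outcome. For contrast, a well-formed config has the shape consumed by every --json invocation in this run. A minimal sketch, with the caveat that the real bdev.json is generated by the harness and not shown in this log, so the attach parameters below are illustrative assumptions:

# Illustrative well-formed SPDK JSON config (contents assumed, not taken
# from the generated bdev.json); traddr matches the controller in this log
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "PCIe", "traddr": "0000:00:10.0" }
        }
      ]
    }
  ]
}
EOF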
00:29:27.210 [2024-07-15 09:55:55.155928] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:29:27.210 [2024-07-15 09:55:55.155936] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:27.469 00:29:27.469 real 0m0.732s 00:29:27.469 user 0m0.256s 00:29:27.469 sys 0m0.473s 00:29:27.469 09:55:55 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:29:27.469 09:55:55 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:27.469 09:55:55 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:29:27.469 ************************************ 00:29:27.469 END TEST bdev_json_nonarray 00:29:27.469 ************************************ 00:29:27.469 09:55:55 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:29:27.469 09:55:55 blockdev_nvme -- bdev/blockdev.sh@785 -- # true 00:29:27.469 09:55:55 blockdev_nvme -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:29:27.469 09:55:55 blockdev_nvme -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:29:27.469 09:55:55 blockdev_nvme -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:29:27.469 09:55:55 blockdev_nvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:29:27.469 09:55:55 blockdev_nvme -- bdev/blockdev.sh@811 -- # cleanup 00:29:27.469 09:55:55 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:29:27.469 09:55:55 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:27.469 09:55:55 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:29:27.469 09:55:55 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:29:27.469 09:55:55 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:29:27.469 09:55:55 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:29:27.469 00:29:27.469 real 0m26.052s 00:29:27.469 user 0m37.512s 00:29:27.469 sys 0m6.025s 00:29:27.469 09:55:55 blockdev_nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:27.469 09:55:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:27.469 ************************************ 00:29:27.469 END TEST blockdev_nvme 00:29:27.469 ************************************ 00:29:27.469 09:55:55 -- common/autotest_common.sh@1142 -- # return 0 00:29:27.469 09:55:55 -- spdk/autotest.sh@213 -- # uname -s 00:29:27.469 09:55:55 -- spdk/autotest.sh@213 -- # [[ FreeBSD == Linux ]] 00:29:27.469 09:55:55 -- spdk/autotest.sh@216 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:29:27.469 09:55:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:27.469 09:55:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:27.469 09:55:55 -- common/autotest_common.sh@10 -- # set +x 00:29:27.469 ************************************ 00:29:27.469 START TEST nvme 00:29:27.469 ************************************ 00:29:27.469 09:55:55 nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:29:27.729 * Looking for test storage... 
00:29:27.729 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:29:27.729 09:55:55 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:27.729 hw.nic_uio.bdfs="0:16:0" 00:29:27.988 09:55:55 nvme -- nvme/nvme.sh@79 -- # uname 00:29:27.988 09:55:55 nvme -- nvme/nvme.sh@79 -- # '[' FreeBSD = Linux ']' 00:29:27.988 09:55:55 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:29:27.988 09:55:55 nvme -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:29:27.988 09:55:55 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:27.988 09:55:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:29:27.988 ************************************ 00:29:27.988 START TEST nvme_reset 00:29:27.988 ************************************ 00:29:27.988 09:55:55 nvme.nvme_reset -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:29:28.247 EAL: TSC is not safe to use in SMP mode 00:29:28.247 EAL: TSC is not invariant 00:29:28.247 [2024-07-15 09:55:56.293829] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:28.247 Initializing NVMe Controllers 00:29:28.247 Skipping QEMU NVMe SSD at 0000:00:10.0 00:29:28.247 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:29:28.507 00:29:28.507 real 0m0.491s 00:29:28.507 user 0m0.019s 00:29:28.507 sys 0m0.471s 00:29:28.507 09:55:56 nvme.nvme_reset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:28.507 09:55:56 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:29:28.507 ************************************ 00:29:28.507 END TEST nvme_reset 00:29:28.507 ************************************ 00:29:28.507 09:55:56 nvme -- common/autotest_common.sh@1142 -- # return 0 00:29:28.507 09:55:56 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:29:28.507 09:55:56 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:28.507 09:55:56 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:28.507 09:55:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:29:28.507 ************************************ 00:29:28.507 START TEST nvme_identify 00:29:28.507 ************************************ 00:29:28.507 09:55:56 nvme.nvme_identify -- common/autotest_common.sh@1123 -- # nvme_identify 00:29:28.507 09:55:56 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:29:28.507 09:55:56 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:29:28.507 09:55:56 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:29:28.507 09:55:56 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:29:28.507 09:55:56 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # bdfs=() 00:29:28.507 09:55:56 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # local bdfs 00:29:28.507 09:55:56 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:28.507 09:55:56 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:28.507 09:55:56 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:29:28.507 09:55:56 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:29:28.507 09:55:56 nvme.nvme_identify -- common/autotest_common.sh@1519 -- # printf '%s\n' 
0000:00:10.0 00:29:28.507 09:55:56 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:29:29.445 EAL: TSC is not safe to use in SMP mode 00:29:29.445 EAL: TSC is not invariant 00:29:29.445 [2024-07-15 09:55:57.199311] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:29.445 ===================================================== 00:29:29.445 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:29:29.445 ===================================================== 00:29:29.445 Controller Capabilities/Features 00:29:29.445 ================================ 00:29:29.445 Vendor ID: 1b36 00:29:29.445 Subsystem Vendor ID: 1af4 00:29:29.445 Serial Number: 12340 00:29:29.445 Model Number: QEMU NVMe Ctrl 00:29:29.445 Firmware Version: 8.0.0 00:29:29.445 Recommended Arb Burst: 6 00:29:29.445 IEEE OUI Identifier: 00 54 52 00:29:29.445 Multi-path I/O 00:29:29.445 May have multiple subsystem ports: No 00:29:29.445 May have multiple controllers: No 00:29:29.445 Associated with SR-IOV VF: No 00:29:29.445 Max Data Transfer Size: 524288 00:29:29.445 Max Number of Namespaces: 256 00:29:29.445 Max Number of I/O Queues: 64 00:29:29.445 NVMe Specification Version (VS): 1.4 00:29:29.445 NVMe Specification Version (Identify): 1.4 00:29:29.445 Maximum Queue Entries: 2048 00:29:29.445 Contiguous Queues Required: Yes 00:29:29.445 Arbitration Mechanisms Supported 00:29:29.445 Weighted Round Robin: Not Supported 00:29:29.445 Vendor Specific: Not Supported 00:29:29.445 Reset Timeout: 7500 ms 00:29:29.445 Doorbell Stride: 4 bytes 00:29:29.445 NVM Subsystem Reset: Not Supported 00:29:29.445 Command Sets Supported 00:29:29.445 NVM Command Set: Supported 00:29:29.445 Boot Partition: Not Supported 00:29:29.445 Memory Page Size Minimum: 4096 bytes 00:29:29.445 Memory Page Size Maximum: 65536 bytes 00:29:29.445 Persistent Memory Region: Not Supported 00:29:29.445 Optional Asynchronous Events Supported 00:29:29.445 Namespace Attribute Notices: Supported 00:29:29.445 Firmware Activation Notices: Not Supported 00:29:29.445 ANA Change Notices: Not Supported 00:29:29.445 PLE Aggregate Log Change Notices: Not Supported 00:29:29.445 LBA Status Info Alert Notices: Not Supported 00:29:29.445 EGE Aggregate Log Change Notices: Not Supported 00:29:29.445 Normal NVM Subsystem Shutdown event: Not Supported 00:29:29.445 Zone Descriptor Change Notices: Not Supported 00:29:29.446 Discovery Log Change Notices: Not Supported 00:29:29.446 Controller Attributes 00:29:29.446 128-bit Host Identifier: Not Supported 00:29:29.446 Non-Operational Permissive Mode: Not Supported 00:29:29.446 NVM Sets: Not Supported 00:29:29.446 Read Recovery Levels: Not Supported 00:29:29.446 Endurance Groups: Not Supported 00:29:29.446 Predictable Latency Mode: Not Supported 00:29:29.446 Traffic Based Keep ALive: Not Supported 00:29:29.446 Namespace Granularity: Not Supported 00:29:29.446 SQ Associations: Not Supported 00:29:29.446 UUID List: Not Supported 00:29:29.446 Multi-Domain Subsystem: Not Supported 00:29:29.446 Fixed Capacity Management: Not Supported 00:29:29.446 Variable Capacity Management: Not Supported 00:29:29.446 Delete Endurance Group: Not Supported 00:29:29.446 Delete NVM Set: Not Supported 00:29:29.446 Extended LBA Formats Supported: Supported 00:29:29.446 Flexible Data Placement Supported: Not Supported 00:29:29.446 00:29:29.446 Controller Memory Buffer Support 00:29:29.446 ================================ 00:29:29.446 Supported: No 00:29:29.446 00:29:29.446 
Persistent Memory Region Support 00:29:29.446 ================================ 00:29:29.446 Supported: No 00:29:29.446 00:29:29.446 Admin Command Set Attributes 00:29:29.446 ============================ 00:29:29.446 Security Send/Receive: Not Supported 00:29:29.446 Format NVM: Supported 00:29:29.446 Firmware Activate/Download: Not Supported 00:29:29.446 Namespace Management: Supported 00:29:29.446 Device Self-Test: Not Supported 00:29:29.446 Directives: Supported 00:29:29.446 NVMe-MI: Not Supported 00:29:29.446 Virtualization Management: Not Supported 00:29:29.446 Doorbell Buffer Config: Supported 00:29:29.446 Get LBA Status Capability: Not Supported 00:29:29.446 Command & Feature Lockdown Capability: Not Supported 00:29:29.446 Abort Command Limit: 4 00:29:29.446 Async Event Request Limit: 4 00:29:29.446 Number of Firmware Slots: N/A 00:29:29.446 Firmware Slot 1 Read-Only: N/A 00:29:29.446 Firmware Activation Without Reset: N/A 00:29:29.446 Multiple Update Detection Support: N/A 00:29:29.446 Firmware Update Granularity: No Information Provided 00:29:29.446 Per-Namespace SMART Log: Yes 00:29:29.446 Asymmetric Namespace Access Log Page: Not Supported 00:29:29.446 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:29:29.446 Command Effects Log Page: Supported 00:29:29.446 Get Log Page Extended Data: Supported 00:29:29.446 Telemetry Log Pages: Not Supported 00:29:29.446 Persistent Event Log Pages: Not Supported 00:29:29.446 Supported Log Pages Log Page: May Support 00:29:29.446 Commands Supported & Effects Log Page: Not Supported 00:29:29.446 Feature Identifiers & Effects Log Page:May Support 00:29:29.446 NVMe-MI Commands & Effects Log Page: May Support 00:29:29.446 Data Area 4 for Telemetry Log: Not Supported 00:29:29.446 Error Log Page Entries Supported: 1 00:29:29.446 Keep Alive: Not Supported 00:29:29.446 00:29:29.446 NVM Command Set Attributes 00:29:29.446 ========================== 00:29:29.446 Submission Queue Entry Size 00:29:29.446 Max: 64 00:29:29.446 Min: 64 00:29:29.446 Completion Queue Entry Size 00:29:29.446 Max: 16 00:29:29.446 Min: 16 00:29:29.446 Number of Namespaces: 256 00:29:29.446 Compare Command: Supported 00:29:29.446 Write Uncorrectable Command: Not Supported 00:29:29.446 Dataset Management Command: Supported 00:29:29.446 Write Zeroes Command: Supported 00:29:29.446 Set Features Save Field: Supported 00:29:29.446 Reservations: Not Supported 00:29:29.446 Timestamp: Supported 00:29:29.446 Copy: Supported 00:29:29.446 Volatile Write Cache: Present 00:29:29.446 Atomic Write Unit (Normal): 1 00:29:29.446 Atomic Write Unit (PFail): 1 00:29:29.446 Atomic Compare & Write Unit: 1 00:29:29.446 Fused Compare & Write: Not Supported 00:29:29.446 Scatter-Gather List 00:29:29.446 SGL Command Set: Supported 00:29:29.446 SGL Keyed: Not Supported 00:29:29.446 SGL Bit Bucket Descriptor: Not Supported 00:29:29.446 SGL Metadata Pointer: Not Supported 00:29:29.446 Oversized SGL: Not Supported 00:29:29.446 SGL Metadata Address: Not Supported 00:29:29.446 SGL Offset: Not Supported 00:29:29.446 Transport SGL Data Block: Not Supported 00:29:29.446 Replay Protected Memory Block: Not Supported 00:29:29.446 00:29:29.446 Firmware Slot Information 00:29:29.446 ========================= 00:29:29.446 Active slot: 1 00:29:29.446 Slot 1 Firmware Revision: 1.0 00:29:29.446 00:29:29.446 00:29:29.446 Commands Supported and Effects 00:29:29.446 ============================== 00:29:29.446 Admin Commands 00:29:29.446 -------------- 00:29:29.446 Delete I/O Submission Queue (00h): Supported 00:29:29.446 Create I/O 
Submission Queue (01h): Supported 00:29:29.446 Get Log Page (02h): Supported 00:29:29.446 Delete I/O Completion Queue (04h): Supported 00:29:29.446 Create I/O Completion Queue (05h): Supported 00:29:29.446 Identify (06h): Supported 00:29:29.446 Abort (08h): Supported 00:29:29.446 Set Features (09h): Supported 00:29:29.446 Get Features (0Ah): Supported 00:29:29.446 Asynchronous Event Request (0Ch): Supported 00:29:29.446 Namespace Attachment (15h): Supported NS-Inventory-Change 00:29:29.446 Directive Send (19h): Supported 00:29:29.446 Directive Receive (1Ah): Supported 00:29:29.446 Virtualization Management (1Ch): Supported 00:29:29.446 Doorbell Buffer Config (7Ch): Supported 00:29:29.446 Format NVM (80h): Supported LBA-Change 00:29:29.446 I/O Commands 00:29:29.446 ------------ 00:29:29.446 Flush (00h): Supported LBA-Change 00:29:29.446 Write (01h): Supported LBA-Change 00:29:29.446 Read (02h): Supported 00:29:29.446 Compare (05h): Supported 00:29:29.446 Write Zeroes (08h): Supported LBA-Change 00:29:29.446 Dataset Management (09h): Supported LBA-Change 00:29:29.446 Unknown (0Ch): Supported 00:29:29.446 Unknown (12h): Supported 00:29:29.446 Copy (19h): Supported LBA-Change 00:29:29.446 Unknown (1Dh): Supported LBA-Change 00:29:29.446 00:29:29.446 Error Log 00:29:29.446 ========= 00:29:29.446 00:29:29.446 Arbitration 00:29:29.446 =========== 00:29:29.446 Arbitration Burst: no limit 00:29:29.446 00:29:29.446 Power Management 00:29:29.446 ================ 00:29:29.446 Number of Power States: 1 00:29:29.446 Current Power State: Power State #0 00:29:29.446 Power State #0: 00:29:29.446 Max Power: 25.00 W 00:29:29.446 Non-Operational State: Operational 00:29:29.446 Entry Latency: 16 microseconds 00:29:29.446 Exit Latency: 4 microseconds 00:29:29.446 Relative Read Throughput: 0 00:29:29.446 Relative Read Latency: 0 00:29:29.446 Relative Write Throughput: 0 00:29:29.446 Relative Write Latency: 0 00:29:29.446 Idle Power: Not Reported 00:29:29.446 Active Power: Not Reported 00:29:29.447 Non-Operational Permissive Mode: Not Supported 00:29:29.447 00:29:29.447 Health Information 00:29:29.447 ================== 00:29:29.447 Critical Warnings: 00:29:29.447 Available Spare Space: OK 00:29:29.447 Temperature: OK 00:29:29.447 Device Reliability: OK 00:29:29.447 Read Only: No 00:29:29.447 Volatile Memory Backup: OK 00:29:29.447 Current Temperature: 323 Kelvin (50 Celsius) 00:29:29.447 Temperature Threshold: 343 Kelvin (70 Celsius) 00:29:29.447 Available Spare: 0% 00:29:29.447 Available Spare Threshold: 0% 00:29:29.447 Life Percentage Used: 0% 00:29:29.447 Data Units Read: 12296 00:29:29.447 Data Units Written: 12280 00:29:29.447 Host Read Commands: 309019 00:29:29.447 Host Write Commands: 308868 00:29:29.447 Controller Busy Time: 0 minutes 00:29:29.447 Power Cycles: 0 00:29:29.447 Power On Hours: 0 hours 00:29:29.447 Unsafe Shutdowns: 0 00:29:29.447 Unrecoverable Media Errors: 0 00:29:29.447 Lifetime Error Log Entries: 0 00:29:29.447 Warning Temperature Time: 0 minutes 00:29:29.447 Critical Temperature Time: 0 minutes 00:29:29.447 00:29:29.447 Number of Queues 00:29:29.447 ================ 00:29:29.447 Number of I/O Submission Queues: 64 00:29:29.447 Number of I/O Completion Queues: 64 00:29:29.447 00:29:29.447 ZNS Specific Controller Data 00:29:29.447 ============================ 00:29:29.447 Zone Append Size Limit: 0 00:29:29.447 00:29:29.447 00:29:29.447 Active Namespaces 00:29:29.447 ================= 00:29:29.447 Namespace ID:1 00:29:29.447 Error Recovery Timeout: Unlimited 00:29:29.447 Command Set 
Identifier: NVM (00h) 00:29:29.447 Deallocate: Supported 00:29:29.447 Deallocated/Unwritten Error: Supported 00:29:29.447 Deallocated Read Value: All 0x00 00:29:29.447 Deallocate in Write Zeroes: Not Supported 00:29:29.447 Deallocated Guard Field: 0xFFFF 00:29:29.447 Flush: Supported 00:29:29.447 Reservation: Not Supported 00:29:29.447 Namespace Sharing Capabilities: Private 00:29:29.447 Size (in LBAs): 1310720 (5GiB) 00:29:29.447 Capacity (in LBAs): 1310720 (5GiB) 00:29:29.447 Utilization (in LBAs): 1310720 (5GiB) 00:29:29.447 Thin Provisioning: Not Supported 00:29:29.447 Per-NS Atomic Units: No 00:29:29.447 Maximum Single Source Range Length: 128 00:29:29.447 Maximum Copy Length: 128 00:29:29.447 Maximum Source Range Count: 128 00:29:29.447 NGUID/EUI64 Never Reused: No 00:29:29.447 Namespace Write Protected: No 00:29:29.447 Number of LBA Formats: 8 00:29:29.447 Current LBA Format: LBA Format #04 00:29:29.447 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:29.447 LBA Format #01: Data Size: 512 Metadata Size: 8 00:29:29.447 LBA Format #02: Data Size: 512 Metadata Size: 16 00:29:29.447 LBA Format #03: Data Size: 512 Metadata Size: 64 00:29:29.447 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:29:29.447 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:29:29.447 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:29:29.447 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:29:29.447 00:29:29.447 NVM Specific Namespace Data 00:29:29.447 =========================== 00:29:29.447 Logical Block Storage Tag Mask: 0 00:29:29.447 Protection Information Capabilities: 00:29:29.447 16b Guard Protection Information Storage Tag Support: No 00:29:29.447 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:29:29.447 Storage Tag Check Read Support: No 00:29:29.447 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:29:29.447 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:29:29.447 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:29:29.447 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:29:29.447 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:29:29.447 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:29:29.447 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:29:29.447 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:29:29.447 09:55:57 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:29:29.447 09:55:57 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:29:29.717 EAL: TSC is not safe to use in SMP mode 00:29:29.717 EAL: TSC is not invariant 00:29:29.717 [2024-07-15 09:55:57.699339] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:29.717 ===================================================== 00:29:29.717 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:29:29.717 ===================================================== 00:29:29.717 Controller Capabilities/Features 00:29:29.717 ================================ 00:29:29.717 Vendor ID: 1b36 00:29:29.717 Subsystem Vendor ID: 1af4 00:29:29.717 Serial Number: 12340 00:29:29.717 Model Number: QEMU NVMe Ctrl 
00:29:29.717 Firmware Version: 8.0.0 00:29:29.717 Recommended Arb Burst: 6 00:29:29.717 IEEE OUI Identifier: 00 54 52 00:29:29.717 Multi-path I/O 00:29:29.717 May have multiple subsystem ports: No 00:29:29.717 May have multiple controllers: No 00:29:29.717 Associated with SR-IOV VF: No 00:29:29.717 Max Data Transfer Size: 524288 00:29:29.717 Max Number of Namespaces: 256 00:29:29.717 Max Number of I/O Queues: 64 00:29:29.717 NVMe Specification Version (VS): 1.4 00:29:29.717 NVMe Specification Version (Identify): 1.4 00:29:29.717 Maximum Queue Entries: 2048 00:29:29.717 Contiguous Queues Required: Yes 00:29:29.717 Arbitration Mechanisms Supported 00:29:29.717 Weighted Round Robin: Not Supported 00:29:29.717 Vendor Specific: Not Supported 00:29:29.717 Reset Timeout: 7500 ms 00:29:29.717 Doorbell Stride: 4 bytes 00:29:29.717 NVM Subsystem Reset: Not Supported 00:29:29.717 Command Sets Supported 00:29:29.717 NVM Command Set: Supported 00:29:29.717 Boot Partition: Not Supported 00:29:29.717 Memory Page Size Minimum: 4096 bytes 00:29:29.717 Memory Page Size Maximum: 65536 bytes 00:29:29.717 Persistent Memory Region: Not Supported 00:29:29.717 Optional Asynchronous Events Supported 00:29:29.717 Namespace Attribute Notices: Supported 00:29:29.717 Firmware Activation Notices: Not Supported 00:29:29.717 ANA Change Notices: Not Supported 00:29:29.717 PLE Aggregate Log Change Notices: Not Supported 00:29:29.717 LBA Status Info Alert Notices: Not Supported 00:29:29.717 EGE Aggregate Log Change Notices: Not Supported 00:29:29.717 Normal NVM Subsystem Shutdown event: Not Supported 00:29:29.717 Zone Descriptor Change Notices: Not Supported 00:29:29.717 Discovery Log Change Notices: Not Supported 00:29:29.717 Controller Attributes 00:29:29.717 128-bit Host Identifier: Not Supported 00:29:29.717 Non-Operational Permissive Mode: Not Supported 00:29:29.717 NVM Sets: Not Supported 00:29:29.717 Read Recovery Levels: Not Supported 00:29:29.717 Endurance Groups: Not Supported 00:29:29.717 Predictable Latency Mode: Not Supported 00:29:29.717 Traffic Based Keep ALive: Not Supported 00:29:29.717 Namespace Granularity: Not Supported 00:29:29.717 SQ Associations: Not Supported 00:29:29.717 UUID List: Not Supported 00:29:29.717 Multi-Domain Subsystem: Not Supported 00:29:29.717 Fixed Capacity Management: Not Supported 00:29:29.717 Variable Capacity Management: Not Supported 00:29:29.717 Delete Endurance Group: Not Supported 00:29:29.717 Delete NVM Set: Not Supported 00:29:29.717 Extended LBA Formats Supported: Supported 00:29:29.717 Flexible Data Placement Supported: Not Supported 00:29:29.717 00:29:29.717 Controller Memory Buffer Support 00:29:29.717 ================================ 00:29:29.717 Supported: No 00:29:29.717 00:29:29.717 Persistent Memory Region Support 00:29:29.717 ================================ 00:29:29.717 Supported: No 00:29:29.717 00:29:29.717 Admin Command Set Attributes 00:29:29.717 ============================ 00:29:29.717 Security Send/Receive: Not Supported 00:29:29.717 Format NVM: Supported 00:29:29.717 Firmware Activate/Download: Not Supported 00:29:29.717 Namespace Management: Supported 00:29:29.717 Device Self-Test: Not Supported 00:29:29.717 Directives: Supported 00:29:29.717 NVMe-MI: Not Supported 00:29:29.717 Virtualization Management: Not Supported 00:29:29.717 Doorbell Buffer Config: Supported 00:29:29.717 Get LBA Status Capability: Not Supported 00:29:29.717 Command & Feature Lockdown Capability: Not Supported 00:29:29.717 Abort Command Limit: 4 00:29:29.717 Async Event Request 
Limit: 4 00:29:29.717 Number of Firmware Slots: N/A 00:29:29.717 Firmware Slot 1 Read-Only: N/A 00:29:29.717 Firmware Activation Without Reset: N/A 00:29:29.717 Multiple Update Detection Support: N/A 00:29:29.717 Firmware Update Granularity: No Information Provided 00:29:29.717 Per-Namespace SMART Log: Yes 00:29:29.717 Asymmetric Namespace Access Log Page: Not Supported 00:29:29.717 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:29:29.717 Command Effects Log Page: Supported 00:29:29.717 Get Log Page Extended Data: Supported 00:29:29.717 Telemetry Log Pages: Not Supported 00:29:29.717 Persistent Event Log Pages: Not Supported 00:29:29.717 Supported Log Pages Log Page: May Support 00:29:29.717 Commands Supported & Effects Log Page: Not Supported 00:29:29.718 Feature Identifiers & Effects Log Page:May Support 00:29:29.718 NVMe-MI Commands & Effects Log Page: May Support 00:29:29.718 Data Area 4 for Telemetry Log: Not Supported 00:29:29.718 Error Log Page Entries Supported: 1 00:29:29.718 Keep Alive: Not Supported 00:29:29.718 00:29:29.718 NVM Command Set Attributes 00:29:29.718 ========================== 00:29:29.718 Submission Queue Entry Size 00:29:29.718 Max: 64 00:29:29.718 Min: 64 00:29:29.718 Completion Queue Entry Size 00:29:29.718 Max: 16 00:29:29.718 Min: 16 00:29:29.718 Number of Namespaces: 256 00:29:29.718 Compare Command: Supported 00:29:29.718 Write Uncorrectable Command: Not Supported 00:29:29.718 Dataset Management Command: Supported 00:29:29.718 Write Zeroes Command: Supported 00:29:29.718 Set Features Save Field: Supported 00:29:29.718 Reservations: Not Supported 00:29:29.718 Timestamp: Supported 00:29:29.718 Copy: Supported 00:29:29.718 Volatile Write Cache: Present 00:29:29.718 Atomic Write Unit (Normal): 1 00:29:29.718 Atomic Write Unit (PFail): 1 00:29:29.718 Atomic Compare & Write Unit: 1 00:29:29.718 Fused Compare & Write: Not Supported 00:29:29.718 Scatter-Gather List 00:29:29.718 SGL Command Set: Supported 00:29:29.718 SGL Keyed: Not Supported 00:29:29.718 SGL Bit Bucket Descriptor: Not Supported 00:29:29.718 SGL Metadata Pointer: Not Supported 00:29:29.718 Oversized SGL: Not Supported 00:29:29.718 SGL Metadata Address: Not Supported 00:29:29.718 SGL Offset: Not Supported 00:29:29.718 Transport SGL Data Block: Not Supported 00:29:29.718 Replay Protected Memory Block: Not Supported 00:29:29.718 00:29:29.718 Firmware Slot Information 00:29:29.718 ========================= 00:29:29.718 Active slot: 1 00:29:29.718 Slot 1 Firmware Revision: 1.0 00:29:29.718 00:29:29.718 00:29:29.718 Commands Supported and Effects 00:29:29.718 ============================== 00:29:29.718 Admin Commands 00:29:29.718 -------------- 00:29:29.718 Delete I/O Submission Queue (00h): Supported 00:29:29.718 Create I/O Submission Queue (01h): Supported 00:29:29.718 Get Log Page (02h): Supported 00:29:29.718 Delete I/O Completion Queue (04h): Supported 00:29:29.718 Create I/O Completion Queue (05h): Supported 00:29:29.718 Identify (06h): Supported 00:29:29.718 Abort (08h): Supported 00:29:29.718 Set Features (09h): Supported 00:29:29.718 Get Features (0Ah): Supported 00:29:29.718 Asynchronous Event Request (0Ch): Supported 00:29:29.718 Namespace Attachment (15h): Supported NS-Inventory-Change 00:29:29.718 Directive Send (19h): Supported 00:29:29.718 Directive Receive (1Ah): Supported 00:29:29.718 Virtualization Management (1Ch): Supported 00:29:29.718 Doorbell Buffer Config (7Ch): Supported 00:29:29.718 Format NVM (80h): Supported LBA-Change 00:29:29.718 I/O Commands 00:29:29.718 ------------ 
00:29:29.718 Flush (00h): Supported LBA-Change 00:29:29.718 Write (01h): Supported LBA-Change 00:29:29.718 Read (02h): Supported 00:29:29.718 Compare (05h): Supported 00:29:29.718 Write Zeroes (08h): Supported LBA-Change 00:29:29.718 Dataset Management (09h): Supported LBA-Change 00:29:29.718 Unknown (0Ch): Supported 00:29:29.718 Unknown (12h): Supported 00:29:29.718 Copy (19h): Supported LBA-Change 00:29:29.718 Unknown (1Dh): Supported LBA-Change 00:29:29.718 00:29:29.718 Error Log 00:29:29.718 ========= 00:29:29.718 00:29:29.718 Arbitration 00:29:29.718 =========== 00:29:29.718 Arbitration Burst: no limit 00:29:29.718 00:29:29.718 Power Management 00:29:29.718 ================ 00:29:29.718 Number of Power States: 1 00:29:29.718 Current Power State: Power State #0 00:29:29.718 Power State #0: 00:29:29.718 Max Power: 25.00 W 00:29:29.718 Non-Operational State: Operational 00:29:29.718 Entry Latency: 16 microseconds 00:29:29.718 Exit Latency: 4 microseconds 00:29:29.718 Relative Read Throughput: 0 00:29:29.718 Relative Read Latency: 0 00:29:29.718 Relative Write Throughput: 0 00:29:29.718 Relative Write Latency: 0 00:29:29.718 Idle Power: Not Reported 00:29:29.718 Active Power: Not Reported 00:29:29.718 Non-Operational Permissive Mode: Not Supported 00:29:29.718 00:29:29.718 Health Information 00:29:29.718 ================== 00:29:29.718 Critical Warnings: 00:29:29.718 Available Spare Space: OK 00:29:29.718 Temperature: OK 00:29:29.718 Device Reliability: OK 00:29:29.718 Read Only: No 00:29:29.718 Volatile Memory Backup: OK 00:29:29.718 Current Temperature: 323 Kelvin (50 Celsius) 00:29:29.718 Temperature Threshold: 343 Kelvin (70 Celsius) 00:29:29.718 Available Spare: 0% 00:29:29.718 Available Spare Threshold: 0% 00:29:29.718 Life Percentage Used: 0% 00:29:29.718 Data Units Read: 12296 00:29:29.718 Data Units Written: 12280 00:29:29.718 Host Read Commands: 309019 00:29:29.718 Host Write Commands: 308868 00:29:29.718 Controller Busy Time: 0 minutes 00:29:29.718 Power Cycles: 0 00:29:29.718 Power On Hours: 0 hours 00:29:29.718 Unsafe Shutdowns: 0 00:29:29.718 Unrecoverable Media Errors: 0 00:29:29.718 Lifetime Error Log Entries: 0 00:29:29.718 Warning Temperature Time: 0 minutes 00:29:29.718 Critical Temperature Time: 0 minutes 00:29:29.718 00:29:29.718 Number of Queues 00:29:29.718 ================ 00:29:29.718 Number of I/O Submission Queues: 64 00:29:29.718 Number of I/O Completion Queues: 64 00:29:29.718 00:29:29.718 ZNS Specific Controller Data 00:29:29.718 ============================ 00:29:29.718 Zone Append Size Limit: 0 00:29:29.718 00:29:29.718 00:29:29.719 Active Namespaces 00:29:29.719 ================= 00:29:29.719 Namespace ID:1 00:29:29.719 Error Recovery Timeout: Unlimited 00:29:29.719 Command Set Identifier: NVM (00h) 00:29:29.719 Deallocate: Supported 00:29:29.719 Deallocated/Unwritten Error: Supported 00:29:29.719 Deallocated Read Value: All 0x00 00:29:29.719 Deallocate in Write Zeroes: Not Supported 00:29:29.719 Deallocated Guard Field: 0xFFFF 00:29:29.719 Flush: Supported 00:29:29.719 Reservation: Not Supported 00:29:29.719 Namespace Sharing Capabilities: Private 00:29:29.719 Size (in LBAs): 1310720 (5GiB) 00:29:29.719 Capacity (in LBAs): 1310720 (5GiB) 00:29:29.719 Utilization (in LBAs): 1310720 (5GiB) 00:29:29.719 Thin Provisioning: Not Supported 00:29:29.719 Per-NS Atomic Units: No 00:29:29.719 Maximum Single Source Range Length: 128 00:29:29.719 Maximum Copy Length: 128 00:29:29.719 Maximum Source Range Count: 128 00:29:29.719 NGUID/EUI64 Never Reused: No 
00:29:29.719 Namespace Write Protected: No 00:29:29.719 Number of LBA Formats: 8 00:29:29.719 Current LBA Format: LBA Format #04 00:29:29.719 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:29.719 LBA Format #01: Data Size: 512 Metadata Size: 8 00:29:29.719 LBA Format #02: Data Size: 512 Metadata Size: 16 00:29:29.719 LBA Format #03: Data Size: 512 Metadata Size: 64 00:29:29.719 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:29:29.719 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:29:29.719 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:29:29.719 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:29:29.719 00:29:29.719 NVM Specific Namespace Data 00:29:29.719 =========================== 00:29:29.719 Logical Block Storage Tag Mask: 0 00:29:29.719 Protection Information Capabilities: 00:29:29.719 16b Guard Protection Information Storage Tag Support: No 00:29:29.719 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:29:29.719 Storage Tag Check Read Support: No 00:29:29.719 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:29:29.719 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:29:29.719 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:29:29.719 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:29:29.719 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:29:29.719 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:29:29.719 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:29:29.719 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:29:29.719 00:29:29.719 real 0m1.357s 00:29:29.719 user 0m0.068s 00:29:29.719 sys 0m1.308s 00:29:29.719 09:55:57 nvme.nvme_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:29.719 09:55:57 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:29:29.719 ************************************ 00:29:29.719 END TEST nvme_identify 00:29:29.719 ************************************ 00:29:30.003 09:55:57 nvme -- common/autotest_common.sh@1142 -- # return 0 00:29:30.003 09:55:57 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:29:30.003 09:55:57 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:30.003 09:55:57 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:30.003 09:55:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:29:30.003 ************************************ 00:29:30.003 START TEST nvme_perf 00:29:30.003 ************************************ 00:29:30.003 09:55:57 nvme.nvme_perf -- common/autotest_common.sh@1123 -- # nvme_perf 00:29:30.003 09:55:57 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:29:30.571 EAL: TSC is not safe to use in SMP mode 00:29:30.571 EAL: TSC is not invariant 00:29:30.571 [2024-07-15 09:55:58.545349] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:31.507 Initializing NVMe Controllers 00:29:31.507 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:29:31.507 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:29:31.507 Initialization complete. Launching workers. 
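The two identify dumps above come from the same tool invoked two ways: first enumerating every attached controller, then targeting 0000:00:10.0 explicitly via -r, which is why the two reports agree field for field. The latency figures that follow come from spdk_nvme_perf. A sketch of the three invocations, copied from the commands recorded in this log (flag readings are the usual ones: -q queue depth, -o I/O size in bytes, -t run time in seconds; -LL apparently enables the latency summary and histogram printed below):

# First pass: enumerate and dump all attached controllers
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0
# Second pass: target one controller by transport address
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:PCIe traddr:0000:00:10.0' -i 0
# Perf run behind the numbers below: 128-deep 12288-byte reads for 1 second
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -q 128 -w read -o 12288 -t 1 -LL -i 0 -N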
00:29:31.507 ======================================================== 00:29:31.507 Latency(us) 00:29:31.507 Device Information : IOPS MiB/s Average min max 00:29:31.507 PCIE (0000:00:10.0) NSID 1 from core 0: 93467.00 1095.32 1369.54 332.91 5909.76 00:29:31.507 ======================================================== 00:29:31.507 Total : 93467.00 1095.32 1369.54 332.91 5909.76 00:29:31.507 00:29:31.507 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:29:31.507 ================================================================================= 00:29:31.507 1.00000% : 1202.127us 00:29:31.507 10.00000% : 1248.110us 00:29:31.507 25.00000% : 1280.954us 00:29:31.507 50.00000% : 1326.937us 00:29:31.507 75.00000% : 1372.920us 00:29:31.507 90.00000% : 1458.317us 00:29:31.507 95.00000% : 1569.990us 00:29:31.507 98.00000% : 1996.975us 00:29:31.507 99.00000% : 2614.461us 00:29:31.507 99.50000% : 3310.775us 00:29:31.507 99.90000% : 5334.026us 00:29:31.507 99.99000% : 5859.546us 00:29:31.507 99.99900% : 5912.098us 00:29:31.507 99.99990% : 5912.098us 00:29:31.507 99.99999% : 5912.098us 00:29:31.507 00:29:31.507 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:29:31.507 ============================================================================== 00:29:31.507 Range in us Cumulative IO count 00:29:31.507 331.734 - 333.377: 0.0011% ( 1) 00:29:31.507 349.799 - 351.441: 0.0021% ( 1) 00:29:31.507 351.441 - 353.084: 0.0032% ( 1) 00:29:31.507 354.726 - 356.368: 0.0043% ( 1) 00:29:31.507 476.252 - 479.537: 0.0075% ( 3) 00:29:31.507 479.537 - 482.821: 0.0107% ( 3) 00:29:31.507 482.821 - 486.106: 0.0150% ( 4) 00:29:31.507 486.106 - 489.390: 0.0182% ( 3) 00:29:31.507 489.390 - 492.675: 0.0225% ( 4) 00:29:31.507 492.675 - 495.959: 0.0257% ( 3) 00:29:31.507 495.959 - 499.244: 0.0300% ( 4) 00:29:31.507 499.244 - 502.528: 0.0332% ( 3) 00:29:31.507 502.528 - 505.813: 0.0374% ( 4) 00:29:31.507 505.813 - 509.097: 0.0396% ( 2) 00:29:31.507 509.097 - 512.382: 0.0407% ( 1) 00:29:31.507 1097.023 - 1103.592: 0.0417% ( 1) 00:29:31.507 1103.592 - 1110.161: 0.0428% ( 1) 00:29:31.507 1110.161 - 1116.730: 0.0439% ( 1) 00:29:31.507 1116.730 - 1123.299: 0.0449% ( 1) 00:29:31.507 1123.299 - 1129.868: 0.0471% ( 2) 00:29:31.507 1129.868 - 1136.437: 0.0481% ( 1) 00:29:31.507 1136.437 - 1143.006: 0.0492% ( 1) 00:29:31.507 1143.006 - 1149.575: 0.0514% ( 2) 00:29:31.507 1149.575 - 1156.144: 0.0556% ( 4) 00:29:31.507 1156.144 - 1162.713: 0.0642% ( 8) 00:29:31.507 1162.713 - 1169.282: 0.0867% ( 21) 00:29:31.507 1169.282 - 1175.851: 0.1476% ( 57) 00:29:31.507 1175.851 - 1182.420: 0.2354% ( 82) 00:29:31.507 1182.420 - 1188.989: 0.4001% ( 154) 00:29:31.507 1188.989 - 1195.558: 0.6633% ( 246) 00:29:31.507 1195.558 - 1202.127: 1.0656% ( 376) 00:29:31.507 1202.127 - 1208.696: 1.6509% ( 547) 00:29:31.507 1208.696 - 1215.265: 2.4394% ( 737) 00:29:31.507 1215.265 - 1221.834: 3.4098% ( 907) 00:29:31.507 1221.834 - 1228.403: 4.6091% ( 1121) 00:29:31.507 1228.403 - 1234.972: 6.0535% ( 1350) 00:29:31.507 1234.972 - 1241.541: 7.8520% ( 1681) 00:29:31.507 1241.541 - 1248.110: 10.0891% ( 2091) 00:29:31.507 1248.110 - 1254.679: 12.6184% ( 2364) 00:29:31.507 1254.679 - 1261.248: 15.4654% ( 2661) 00:29:31.507 1261.248 - 1267.817: 18.6322% ( 2960) 00:29:31.507 1267.817 - 1274.386: 22.1201% ( 3260) 00:29:31.507 1274.386 - 1280.954: 25.7289% ( 3373) 00:29:31.507 1280.954 - 1287.523: 29.4970% ( 3522) 00:29:31.507 1287.523 - 1294.092: 33.3711% ( 3621) 00:29:31.507 1294.092 - 1300.661: 37.4100% ( 3775) 00:29:31.507 1300.661 - 
1307.230: 41.4863% ( 3810) 00:29:31.507 1307.230 - 1313.799: 45.6129% ( 3857) 00:29:31.507 1313.799 - 1320.368: 49.6849% ( 3806) 00:29:31.507 1320.368 - 1326.937: 53.6328% ( 3690) 00:29:31.507 1326.937 - 1333.506: 57.4813% ( 3597) 00:29:31.507 1333.506 - 1340.075: 61.1874% ( 3464) 00:29:31.507 1340.075 - 1346.644: 64.7159% ( 3298) 00:29:31.507 1346.644 - 1353.213: 68.0454% ( 3112) 00:29:31.507 1353.213 - 1359.782: 71.0336% ( 2793) 00:29:31.507 1359.782 - 1366.351: 73.6335% ( 2430) 00:29:31.507 1366.351 - 1372.920: 75.9445% ( 2160) 00:29:31.507 1372.920 - 1379.489: 78.0211% ( 1941) 00:29:31.507 1379.489 - 1386.058: 79.8592% ( 1718) 00:29:31.507 1386.058 - 1392.627: 81.4491% ( 1486) 00:29:31.507 1392.627 - 1399.196: 82.8571% ( 1316) 00:29:31.507 1399.196 - 1405.765: 84.1174% ( 1178) 00:29:31.507 1405.765 - 1412.334: 85.2140% ( 1025) 00:29:31.507 1412.334 - 1418.903: 86.1727% ( 896) 00:29:31.507 1418.903 - 1425.472: 87.0029% ( 776) 00:29:31.507 1425.472 - 1432.041: 87.7144% ( 665) 00:29:31.507 1432.041 - 1438.610: 88.3788% ( 621) 00:29:31.507 1438.610 - 1445.179: 88.9769% ( 559) 00:29:31.507 1445.179 - 1451.748: 89.5428% ( 529) 00:29:31.507 1451.748 - 1458.317: 90.0542% ( 478) 00:29:31.507 1458.317 - 1464.886: 90.5357% ( 450) 00:29:31.507 1464.886 - 1471.455: 90.9701% ( 406) 00:29:31.507 1471.455 - 1478.024: 91.3938% ( 396) 00:29:31.507 1478.024 - 1484.593: 91.7853% ( 366) 00:29:31.507 1484.593 - 1491.162: 92.1598% ( 350) 00:29:31.507 1491.162 - 1497.731: 92.5182% ( 335) 00:29:31.507 1497.731 - 1504.300: 92.8627% ( 322) 00:29:31.507 1504.300 - 1510.869: 93.1698% ( 287) 00:29:31.507 1510.869 - 1517.438: 93.4501% ( 262) 00:29:31.507 1517.438 - 1524.007: 93.7251% ( 257) 00:29:31.507 1524.007 - 1530.576: 93.9583% ( 218) 00:29:31.507 1530.576 - 1537.145: 94.1744% ( 202) 00:29:31.507 1537.145 - 1543.714: 94.3723% ( 185) 00:29:31.507 1543.714 - 1550.283: 94.5532% ( 169) 00:29:31.507 1550.283 - 1556.852: 94.7126% ( 149) 00:29:31.507 1556.852 - 1563.421: 94.8634% ( 141) 00:29:31.507 1563.421 - 1569.990: 95.0207% ( 147) 00:29:31.507 1569.990 - 1576.559: 95.1790% ( 148) 00:29:31.507 1576.559 - 1583.128: 95.3470% ( 157) 00:29:31.507 1583.128 - 1589.697: 95.4968% ( 140) 00:29:31.507 1589.697 - 1596.266: 95.6327% ( 127) 00:29:31.507 1596.266 - 1602.835: 95.7696% ( 128) 00:29:31.507 1602.835 - 1609.404: 95.8959% ( 118) 00:29:31.507 1609.404 - 1615.973: 96.0275% ( 123) 00:29:31.507 1615.973 - 1622.542: 96.1452% ( 110) 00:29:31.507 1622.542 - 1629.111: 96.2629% ( 110) 00:29:31.507 1629.111 - 1635.680: 96.3773% ( 107) 00:29:31.507 1635.680 - 1642.249: 96.4865% ( 102) 00:29:31.507 1642.249 - 1648.818: 96.5849% ( 92) 00:29:31.507 1648.818 - 1655.387: 96.6769% ( 86) 00:29:31.507 1655.387 - 1661.956: 96.7582% ( 76) 00:29:31.507 1661.956 - 1668.525: 96.8320% ( 69) 00:29:31.507 1668.525 - 1675.094: 96.8898% ( 54) 00:29:31.507 1675.094 - 1681.663: 96.9454% ( 52) 00:29:31.507 1681.663 - 1694.801: 97.0332% ( 82) 00:29:31.507 1694.801 - 1707.939: 97.1113% ( 73) 00:29:31.507 1707.939 - 1721.077: 97.1851% ( 69) 00:29:31.507 1721.077 - 1734.215: 97.2557% ( 66) 00:29:31.507 1734.215 - 1747.353: 97.3274% ( 67) 00:29:31.507 1747.353 - 1760.491: 97.3713% ( 41) 00:29:31.507 1760.491 - 1773.629: 97.3927% ( 20) 00:29:31.507 1773.629 - 1786.767: 97.4066% ( 13) 00:29:31.507 1786.767 - 1799.905: 97.4141% ( 7) 00:29:31.507 1799.905 - 1813.043: 97.4258% ( 11) 00:29:31.507 1813.043 - 1826.181: 97.4376% ( 11) 00:29:31.508 1826.181 - 1839.319: 97.4558% ( 17) 00:29:31.508 1839.319 - 1852.457: 97.4836% ( 26) 00:29:31.508 1852.457 - 
1865.595: 97.5189% ( 33) 00:29:31.508 1865.595 - 1878.733: 97.5531% ( 32) 00:29:31.508 1878.733 - 1891.871: 97.5927% ( 37) 00:29:31.508 1891.871 - 1905.009: 97.6355% ( 40) 00:29:31.508 1905.009 - 1918.147: 97.6880% ( 49) 00:29:31.508 1918.147 - 1931.285: 97.7425% ( 51) 00:29:31.508 1931.285 - 1944.423: 97.7907% ( 45) 00:29:31.508 1944.423 - 1957.561: 97.8377% ( 44) 00:29:31.508 1957.561 - 1970.699: 97.8944% ( 53) 00:29:31.508 1970.699 - 1983.837: 97.9522% ( 54) 00:29:31.508 1983.837 - 1996.975: 98.0121% ( 56) 00:29:31.508 1996.975 - 2010.113: 98.0678% ( 52) 00:29:31.508 2010.113 - 2023.251: 98.1223% ( 51) 00:29:31.508 2023.251 - 2036.389: 98.1737% ( 48) 00:29:31.508 2036.389 - 2049.527: 98.2154% ( 39) 00:29:31.508 2049.527 - 2062.665: 98.2529% ( 35) 00:29:31.508 2062.665 - 2075.803: 98.2839% ( 29) 00:29:31.508 2075.803 - 2088.941: 98.3117% ( 26) 00:29:31.508 2088.941 - 2102.079: 98.3374% ( 24) 00:29:31.508 2102.079 - 2115.217: 98.3545% ( 16) 00:29:31.508 2115.217 - 2128.355: 98.3673% ( 12) 00:29:31.508 2128.355 - 2141.493: 98.3759% ( 8) 00:29:31.508 2141.493 - 2154.631: 98.3812% ( 5) 00:29:31.508 2154.631 - 2167.769: 98.3866% ( 5) 00:29:31.508 2167.769 - 2180.907: 98.3909% ( 4) 00:29:31.508 2180.907 - 2194.045: 98.3952% ( 4) 00:29:31.508 2194.045 - 2207.183: 98.4016% ( 6) 00:29:31.508 2207.183 - 2220.321: 98.4080% ( 6) 00:29:31.508 2220.321 - 2233.459: 98.4123% ( 4) 00:29:31.508 2233.459 - 2246.597: 98.4176% ( 5) 00:29:31.508 2246.597 - 2259.735: 98.4240% ( 6) 00:29:31.508 2259.735 - 2272.873: 98.4315% ( 7) 00:29:31.508 2272.873 - 2286.011: 98.4444% ( 12) 00:29:31.508 2286.011 - 2299.149: 98.4604% ( 15) 00:29:31.508 2299.149 - 2312.287: 98.4765% ( 15) 00:29:31.508 2312.287 - 2325.425: 98.4936% ( 16) 00:29:31.508 2325.425 - 2338.563: 98.5096% ( 15) 00:29:31.508 2338.563 - 2351.701: 98.5257% ( 15) 00:29:31.508 2351.701 - 2364.839: 98.5439% ( 17) 00:29:31.508 2364.839 - 2377.977: 98.5599% ( 15) 00:29:31.508 2377.977 - 2391.115: 98.5760% ( 15) 00:29:31.508 2391.115 - 2404.253: 98.5995% ( 22) 00:29:31.508 2404.253 - 2417.391: 98.6252% ( 24) 00:29:31.508 2417.391 - 2430.529: 98.6573% ( 30) 00:29:31.508 2430.529 - 2443.667: 98.6904% ( 31) 00:29:31.508 2443.667 - 2456.805: 98.7236% ( 31) 00:29:31.508 2456.805 - 2469.943: 98.7546% ( 29) 00:29:31.508 2469.943 - 2483.081: 98.7889% ( 32) 00:29:31.508 2483.081 - 2496.219: 98.8199% ( 29) 00:29:31.508 2496.219 - 2509.357: 98.8488% ( 27) 00:29:31.508 2509.357 - 2522.495: 98.8852% ( 34) 00:29:31.508 2522.495 - 2535.633: 98.9119% ( 25) 00:29:31.508 2535.633 - 2548.771: 98.9333% ( 20) 00:29:31.508 2548.771 - 2561.909: 98.9504% ( 16) 00:29:31.508 2561.909 - 2575.047: 98.9654% ( 14) 00:29:31.508 2575.047 - 2588.185: 98.9793% ( 13) 00:29:31.508 2588.185 - 2601.323: 98.9943% ( 14) 00:29:31.508 2601.323 - 2614.461: 99.0093% ( 14) 00:29:31.508 2614.461 - 2627.599: 99.0243% ( 14) 00:29:31.508 2627.599 - 2640.737: 99.0382% ( 13) 00:29:31.508 2640.737 - 2653.875: 99.0531% ( 14) 00:29:31.508 2653.875 - 2667.013: 99.0649% ( 11) 00:29:31.508 2667.013 - 2680.151: 99.0745% ( 9) 00:29:31.508 2680.151 - 2693.289: 99.0820% ( 7) 00:29:31.508 2706.427 - 2719.565: 99.0842% ( 2) 00:29:31.508 2719.565 - 2732.703: 99.1002% ( 15) 00:29:31.508 2732.703 - 2745.841: 99.1141% ( 13) 00:29:31.508 2745.841 - 2758.979: 99.1302% ( 15) 00:29:31.508 2758.979 - 2772.117: 99.1452% ( 14) 00:29:31.508 2772.117 - 2785.255: 99.1623% ( 16) 00:29:31.508 2785.255 - 2798.393: 99.1762% ( 13) 00:29:31.508 2798.393 - 2811.531: 99.1922% ( 15) 00:29:31.508 2811.531 - 2824.669: 99.2072% ( 14) 00:29:31.508 
2824.669 - 2837.807: 99.2222% ( 14) 00:29:31.508 2837.807 - 2850.945: 99.2372% ( 14) 00:29:31.508 2850.945 - 2864.083: 99.2500% ( 12) 00:29:31.508 2864.083 - 2877.221: 99.2607% ( 10) 00:29:31.508 2877.221 - 2890.359: 99.2735% ( 12) 00:29:31.508 2890.359 - 2903.497: 99.2874% ( 13) 00:29:31.508 2903.497 - 2916.635: 99.3024% ( 14) 00:29:31.508 2916.635 - 2929.773: 99.3153% ( 12) 00:29:31.508 2929.773 - 2942.911: 99.3292% ( 13) 00:29:31.508 2942.911 - 2956.049: 99.3431% ( 13) 00:29:31.508 2956.049 - 2969.187: 99.3570% ( 13) 00:29:31.508 2969.187 - 2982.325: 99.3709% ( 13) 00:29:31.508 2982.325 - 2995.463: 99.3848% ( 13) 00:29:31.508 2995.463 - 3008.601: 99.3934% ( 8) 00:29:31.508 3008.601 - 3021.739: 99.4009% ( 7) 00:29:31.508 3021.739 - 3034.877: 99.4083% ( 7) 00:29:31.508 3034.877 - 3048.015: 99.4116% ( 3) 00:29:31.508 3126.843 - 3139.981: 99.4126% ( 1) 00:29:31.508 3205.671 - 3218.809: 99.4180% ( 5) 00:29:31.508 3218.809 - 3231.947: 99.4297% ( 11) 00:29:31.508 3231.947 - 3245.085: 99.4447% ( 14) 00:29:31.508 3245.085 - 3258.223: 99.4576% ( 12) 00:29:31.508 3258.223 - 3271.361: 99.4736% ( 15) 00:29:31.508 3271.361 - 3284.499: 99.4854% ( 11) 00:29:31.508 3284.499 - 3297.637: 99.4929% ( 7) 00:29:31.508 3297.637 - 3310.775: 99.5004% ( 7) 00:29:31.508 3310.775 - 3323.913: 99.5078% ( 7) 00:29:31.508 3323.913 - 3337.051: 99.5153% ( 7) 00:29:31.508 3337.051 - 3350.189: 99.5228% ( 7) 00:29:31.508 3350.189 - 3363.327: 99.5303% ( 7) 00:29:31.508 3363.327 - 3389.603: 99.5453% ( 14) 00:29:31.508 3389.603 - 3415.879: 99.5496% ( 4) 00:29:31.508 3757.467 - 3783.743: 99.5506% ( 1) 00:29:31.508 3783.743 - 3810.019: 99.5539% ( 3) 00:29:31.508 3941.398 - 3967.674: 99.5571% ( 3) 00:29:31.508 3967.674 - 3993.950: 99.5742% ( 16) 00:29:31.508 3993.950 - 4020.226: 99.5913% ( 16) 00:29:31.508 4020.226 - 4046.502: 99.6073% ( 15) 00:29:31.508 4046.502 - 4072.778: 99.6245% ( 16) 00:29:31.508 4072.778 - 4099.054: 99.6416% ( 16) 00:29:31.508 4099.054 - 4125.330: 99.6587% ( 16) 00:29:31.508 4125.330 - 4151.606: 99.6758% ( 16) 00:29:31.508 4151.606 - 4177.882: 99.6865% ( 10) 00:29:31.508 4335.538 - 4361.814: 99.6983% ( 11) 00:29:31.508 4361.814 - 4388.090: 99.7122% ( 13) 00:29:31.508 4388.090 - 4414.366: 99.7175% ( 5) 00:29:31.508 4519.470 - 4545.746: 99.7272% ( 9) 00:29:31.508 4545.746 - 4572.022: 99.7432% ( 15) 00:29:31.508 4572.022 - 4598.298: 99.7582% ( 14) 00:29:31.508 4598.298 - 4624.574: 99.7743% ( 15) 00:29:31.508 4624.574 - 4650.850: 99.7903% ( 15) 00:29:31.508 4650.850 - 4677.126: 99.8053% ( 14) 00:29:31.508 4677.126 - 4703.402: 99.8213% ( 15) 00:29:31.508 4703.402 - 4729.678: 99.8363% ( 14) 00:29:31.508 4729.678 - 4755.954: 99.8459% ( 9) 00:29:31.508 4834.782 - 4861.058: 99.8502% ( 4) 00:29:31.508 4861.058 - 4887.334: 99.8545% ( 4) 00:29:31.508 4887.334 - 4913.610: 99.8566% ( 2) 00:29:31.508 4913.610 - 4939.886: 99.8620% ( 5) 00:29:31.508 5150.094 - 5176.370: 99.8641% ( 2) 00:29:31.508 5176.370 - 5202.646: 99.8684% ( 4) 00:29:31.508 5202.646 - 5228.922: 99.8727% ( 4) 00:29:31.508 5228.922 - 5255.198: 99.8780% ( 5) 00:29:31.508 5255.198 - 5281.474: 99.8823% ( 4) 00:29:31.508 5281.474 - 5307.750: 99.8973% ( 14) 00:29:31.508 5307.750 - 5334.026: 99.9016% ( 4) 00:29:31.508 5334.026 - 5360.302: 99.9058% ( 4) 00:29:31.508 5360.302 - 5386.578: 99.9101% ( 4) 00:29:31.508 5386.578 - 5412.854: 99.9144% ( 4) 00:29:31.508 5412.854 - 5439.130: 99.9176% ( 3) 00:29:31.508 5439.130 - 5465.406: 99.9230% ( 5) 00:29:31.508 5465.406 - 5491.682: 99.9272% ( 4) 00:29:31.508 5491.682 - 5517.958: 99.9326% ( 5) 00:29:31.508 5517.958 - 
5544.234: 99.9369% ( 4) 00:29:31.508 5544.234 - 5570.510: 99.9412% ( 4) 00:29:31.508 5570.510 - 5596.786: 99.9454% ( 4) 00:29:31.508 5596.786 - 5623.062: 99.9508% ( 5) 00:29:31.508 5623.062 - 5649.338: 99.9561% ( 5) 00:29:31.508 5649.338 - 5675.614: 99.9604% ( 4) 00:29:31.508 5675.614 - 5701.890: 99.9658% ( 5) 00:29:31.508 5701.890 - 5728.166: 99.9690% ( 3) 00:29:31.508 5728.166 - 5754.442: 99.9743% ( 5) 00:29:31.508 5754.442 - 5780.718: 99.9786% ( 4) 00:29:31.508 5780.718 - 5806.994: 99.9840% ( 5) 00:29:31.508 5806.994 - 5833.270: 99.9882% ( 4) 00:29:31.508 5833.270 - 5859.546: 99.9925% ( 4) 00:29:31.508 5859.546 - 5885.822: 99.9968% ( 4) 00:29:31.508 5885.822 - 5912.098: 100.0000% ( 3) 00:29:31.508 00:29:31.508 09:55:59 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:29:32.442 EAL: TSC is not safe to use in SMP mode 00:29:32.442 EAL: TSC is not invariant 00:29:32.442 [2024-07-15 09:56:00.329554] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:33.378 Initializing NVMe Controllers 00:29:33.378 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:29:33.378 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:29:33.378 Initialization complete. Launching workers. 00:29:33.378 ======================================================== 00:29:33.378 Latency(us) 00:29:33.378 Device Information : IOPS MiB/s Average min max 00:29:33.378 PCIE (0000:00:10.0) NSID 1 from core 0: 68451.65 802.17 1869.93 148.05 10256.14 00:29:33.378 ======================================================== 00:29:33.378 Total : 68451.65 802.17 1869.93 148.05 10256.14 00:29:33.378 00:29:33.378 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:29:33.378 ================================================================================= 00:29:33.378 1.00000% : 1051.040us 00:29:33.378 10.00000% : 1353.213us 00:29:33.378 25.00000% : 1596.266us 00:29:33.378 50.00000% : 1839.319us 00:29:33.378 75.00000% : 2036.389us 00:29:33.378 90.00000% : 2351.701us 00:29:33.378 95.00000% : 2758.979us 00:29:33.378 98.00000% : 3271.361us 00:29:33.378 99.00000% : 3494.707us 00:29:33.378 99.50000% : 3915.122us 00:29:33.378 99.90000% : 5202.646us 00:29:33.378 99.99000% : 9196.596us 00:29:33.378 99.99900% : 10300.188us 00:29:33.378 99.99990% : 10300.188us 00:29:33.378 99.99999% : 10300.188us 00:29:33.378 00:29:33.378 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:29:33.378 ============================================================================== 00:29:33.378 Range in us Cumulative IO count 00:29:33.378 147.802 - 148.624: 0.0015% ( 1) 00:29:33.378 165.867 - 166.688: 0.0029% ( 1) 00:29:33.378 177.363 - 178.184: 0.0044% ( 1) 00:29:33.378 210.208 - 211.850: 0.0058% ( 1) 00:29:33.378 211.850 - 213.492: 0.0073% ( 1) 00:29:33.378 259.475 - 261.118: 0.0088% ( 1) 00:29:33.378 261.118 - 262.760: 0.0102% ( 1) 00:29:33.378 297.247 - 298.889: 0.0131% ( 2) 00:29:33.378 376.075 - 377.717: 0.0146% ( 1) 00:29:33.378 379.360 - 381.002: 0.0190% ( 3) 00:29:33.378 381.002 - 382.644: 0.0219% ( 2) 00:29:33.378 382.644 - 384.286: 0.0248% ( 2) 00:29:33.378 384.286 - 385.929: 0.0277% ( 2) 00:29:33.378 387.571 - 389.213: 0.0292% ( 1) 00:29:33.378 389.213 - 390.855: 0.0307% ( 1) 00:29:33.378 390.855 - 392.498: 0.0336% ( 2) 00:29:33.378 392.498 - 394.140: 0.0380% ( 3) 00:29:33.378 394.140 - 395.782: 0.0394% ( 1) 00:29:33.378 395.782 - 397.424: 0.0423% ( 2) 00:29:33.378 399.067 - 400.709: 0.0438% ( 
1) 00:29:33.378 400.709 - 402.351: 0.0453% ( 1) 00:29:33.378 423.700 - 426.985: 0.0496% ( 3) 00:29:33.378 430.269 - 433.554: 0.0511% ( 1) 00:29:33.378 476.252 - 479.537: 0.0526% ( 1) 00:29:33.378 479.537 - 482.821: 0.0540% ( 1) 00:29:33.378 482.821 - 486.106: 0.0555% ( 1) 00:29:33.378 518.951 - 522.235: 0.0569% ( 1) 00:29:33.378 683.176 - 686.460: 0.0584% ( 1) 00:29:33.378 686.460 - 689.745: 0.0657% ( 5) 00:29:33.378 689.745 - 693.029: 0.0730% ( 5) 00:29:33.378 693.029 - 696.314: 0.0745% ( 1) 00:29:33.378 696.314 - 699.598: 0.0788% ( 3) 00:29:33.378 699.598 - 702.883: 0.0890% ( 7) 00:29:33.378 702.883 - 706.167: 0.0920% ( 2) 00:29:33.378 706.167 - 709.452: 0.0934% ( 1) 00:29:33.378 709.452 - 712.736: 0.0963% ( 2) 00:29:33.378 712.736 - 716.021: 0.0993% ( 2) 00:29:33.378 716.021 - 719.305: 0.1036% ( 3) 00:29:33.378 719.305 - 722.590: 0.1109% ( 5) 00:29:33.378 722.590 - 725.874: 0.1153% ( 3) 00:29:33.378 725.874 - 729.159: 0.1212% ( 4) 00:29:33.378 729.159 - 732.443: 0.1285% ( 5) 00:29:33.378 732.443 - 735.728: 0.1358% ( 5) 00:29:33.378 735.728 - 739.012: 0.1416% ( 4) 00:29:33.378 739.012 - 742.297: 0.1445% ( 2) 00:29:33.378 742.297 - 745.581: 0.1474% ( 2) 00:29:33.378 745.581 - 748.866: 0.1489% ( 1) 00:29:33.378 748.866 - 752.150: 0.1504% ( 1) 00:29:33.378 752.150 - 755.435: 0.1518% ( 1) 00:29:33.378 755.435 - 758.719: 0.1533% ( 1) 00:29:33.378 765.288 - 768.573: 0.1547% ( 1) 00:29:33.378 768.573 - 771.857: 0.1606% ( 4) 00:29:33.378 771.857 - 775.142: 0.1650% ( 3) 00:29:33.378 778.426 - 781.711: 0.1664% ( 1) 00:29:33.378 781.711 - 784.995: 0.1693% ( 2) 00:29:33.378 788.280 - 791.564: 0.1708% ( 1) 00:29:33.378 791.564 - 794.849: 0.1781% ( 5) 00:29:33.378 794.849 - 798.133: 0.1796% ( 1) 00:29:33.378 811.271 - 814.556: 0.1810% ( 1) 00:29:33.378 814.556 - 817.840: 0.1854% ( 3) 00:29:33.378 817.840 - 821.125: 0.1883% ( 2) 00:29:33.378 821.125 - 824.409: 0.1912% ( 2) 00:29:33.378 824.409 - 827.694: 0.1956% ( 3) 00:29:33.378 827.694 - 830.978: 0.1971% ( 1) 00:29:33.378 830.978 - 834.263: 0.2000% ( 2) 00:29:33.378 834.263 - 837.547: 0.2015% ( 1) 00:29:33.378 837.547 - 840.832: 0.2029% ( 1) 00:29:33.378 840.832 - 847.401: 0.2088% ( 4) 00:29:33.378 847.401 - 853.970: 0.2190% ( 7) 00:29:33.378 853.970 - 860.539: 0.2219% ( 2) 00:29:33.378 860.539 - 867.108: 0.2248% ( 2) 00:29:33.378 867.108 - 873.677: 0.2467% ( 15) 00:29:33.378 873.677 - 880.246: 0.2628% ( 11) 00:29:33.378 880.246 - 886.815: 0.2701% ( 5) 00:29:33.378 886.815 - 893.384: 0.2861% ( 11) 00:29:33.378 893.384 - 899.953: 0.2905% ( 3) 00:29:33.378 899.953 - 906.522: 0.2963% ( 4) 00:29:33.378 906.522 - 913.091: 0.3080% ( 8) 00:29:33.378 913.091 - 919.660: 0.3182% ( 7) 00:29:33.378 919.660 - 926.229: 0.3591% ( 28) 00:29:33.378 926.229 - 932.798: 0.3737% ( 10) 00:29:33.378 932.798 - 939.367: 0.3927% ( 13) 00:29:33.378 939.367 - 945.936: 0.4044% ( 8) 00:29:33.378 945.936 - 952.505: 0.4102% ( 4) 00:29:33.378 952.505 - 959.074: 0.4190% ( 6) 00:29:33.378 959.074 - 965.643: 0.4350% ( 11) 00:29:33.378 965.643 - 972.212: 0.4715% ( 25) 00:29:33.378 972.212 - 978.781: 0.4934% ( 15) 00:29:33.378 978.781 - 985.350: 0.5299% ( 25) 00:29:33.378 985.350 - 991.919: 0.5606% ( 21) 00:29:33.378 991.919 - 998.488: 0.6131% ( 36) 00:29:33.378 998.488 - 1005.057: 0.6774% ( 44) 00:29:33.378 1005.057 - 1011.626: 0.7372% ( 41) 00:29:33.378 1011.626 - 1018.195: 0.7912% ( 37) 00:29:33.378 1018.195 - 1024.764: 0.8394% ( 33) 00:29:33.378 1024.764 - 1031.333: 0.8744% ( 24) 00:29:33.378 1031.333 - 1037.902: 0.9285% ( 37) 00:29:33.378 1037.902 - 1044.471: 0.9781% ( 34) 
00:29:33.378 1044.471 - 1051.040: 1.0394% ( 42) 00:29:33.378 1051.040 - 1057.609: 1.1197% ( 55) 00:29:33.378 1057.609 - 1064.178: 1.1956% ( 52) 00:29:33.378 1064.178 - 1070.747: 1.2744% ( 54) 00:29:33.378 1070.747 - 1077.316: 1.3591% ( 58) 00:29:33.378 1077.316 - 1083.885: 1.4949% ( 93) 00:29:33.378 1083.885 - 1090.454: 1.6788% ( 126) 00:29:33.378 1090.454 - 1097.023: 1.8233% ( 99) 00:29:33.378 1097.023 - 1103.592: 1.9839% ( 110) 00:29:33.378 1103.592 - 1110.161: 2.1051% ( 83) 00:29:33.378 1110.161 - 1116.730: 2.2759% ( 117) 00:29:33.378 1116.730 - 1123.299: 2.3927% ( 80) 00:29:33.378 1123.299 - 1129.868: 2.4832% ( 62) 00:29:33.378 1129.868 - 1136.437: 2.6160% ( 91) 00:29:33.378 1136.437 - 1143.006: 2.7708% ( 106) 00:29:33.378 1143.006 - 1149.575: 2.9445% ( 119) 00:29:33.378 1149.575 - 1156.144: 3.1197% ( 120) 00:29:33.378 1156.144 - 1162.713: 3.2598% ( 96) 00:29:33.378 1162.713 - 1169.282: 3.4189% ( 109) 00:29:33.378 1169.282 - 1175.851: 3.5386% ( 82) 00:29:33.378 1175.851 - 1182.420: 3.7080% ( 116) 00:29:33.378 1182.420 - 1188.989: 3.8788% ( 117) 00:29:33.378 1188.989 - 1195.558: 4.0773% ( 136) 00:29:33.378 1195.558 - 1202.127: 4.2233% ( 100) 00:29:33.378 1202.127 - 1208.696: 4.3999% ( 121) 00:29:33.378 1208.696 - 1215.265: 4.6248% ( 154) 00:29:33.378 1215.265 - 1221.834: 4.8627% ( 163) 00:29:33.378 1221.834 - 1228.403: 5.0977% ( 161) 00:29:33.378 1228.403 - 1234.972: 5.3138% ( 148) 00:29:33.378 1234.972 - 1241.541: 5.4773% ( 112) 00:29:33.378 1241.541 - 1248.110: 5.7255% ( 170) 00:29:33.378 1248.110 - 1254.679: 5.9079% ( 125) 00:29:33.378 1254.679 - 1261.248: 6.1590% ( 172) 00:29:33.378 1261.248 - 1267.817: 6.4262% ( 183) 00:29:33.379 1267.817 - 1274.386: 6.7342% ( 211) 00:29:33.379 1274.386 - 1280.954: 7.1225% ( 266) 00:29:33.379 1280.954 - 1287.523: 7.3955% ( 187) 00:29:33.379 1287.523 - 1294.092: 7.7108% ( 216) 00:29:33.379 1294.092 - 1300.661: 7.9780% ( 183) 00:29:33.379 1300.661 - 1307.230: 8.2743% ( 203) 00:29:33.379 1307.230 - 1313.799: 8.5780% ( 208) 00:29:33.379 1313.799 - 1320.368: 8.8086% ( 158) 00:29:33.379 1320.368 - 1326.937: 9.0626% ( 174) 00:29:33.379 1326.937 - 1333.506: 9.3385% ( 189) 00:29:33.379 1333.506 - 1340.075: 9.6130% ( 188) 00:29:33.379 1340.075 - 1346.644: 9.7969% ( 126) 00:29:33.945 1346.644 - 1353.213: 10.0072% ( 144) 00:29:33.945 1353.213 - 1359.782: 10.3458% ( 232) 00:29:33.945 1359.782 - 1366.351: 10.6393% ( 201) 00:29:33.945 1366.351 - 1372.920: 11.0057% ( 251) 00:29:33.945 1372.920 - 1379.489: 11.2889% ( 194) 00:29:33.945 1379.489 - 1386.058: 11.5619% ( 187) 00:29:33.945 1386.058 - 1392.627: 11.8290% ( 183) 00:29:33.945 1392.627 - 1399.196: 12.1808% ( 241) 00:29:33.945 1399.196 - 1405.765: 12.5108% ( 226) 00:29:33.945 1405.765 - 1412.334: 12.8027% ( 200) 00:29:33.945 1412.334 - 1418.903: 13.0830% ( 192) 00:29:33.945 1418.903 - 1425.472: 13.4509% ( 252) 00:29:33.945 1425.472 - 1432.041: 13.8670% ( 285) 00:29:33.945 1432.041 - 1438.610: 14.1998% ( 228) 00:29:33.945 1438.610 - 1445.179: 14.5034% ( 208) 00:29:33.945 1445.179 - 1451.748: 14.8377% ( 229) 00:29:33.945 1451.748 - 1458.317: 15.1136% ( 189) 00:29:33.945 1458.317 - 1464.886: 15.4348% ( 220) 00:29:33.945 1464.886 - 1471.455: 15.8479% ( 283) 00:29:33.945 1471.455 - 1478.024: 16.4406% ( 406) 00:29:33.945 1478.024 - 1484.593: 16.8129% ( 255) 00:29:33.945 1484.593 - 1491.162: 17.2362% ( 290) 00:29:33.945 1491.162 - 1497.731: 17.6654% ( 294) 00:29:33.945 1497.731 - 1504.300: 18.1749% ( 349) 00:29:33.946 1504.300 - 1510.869: 18.7063% ( 364) 00:29:33.946 1510.869 - 1517.438: 19.1603% ( 311) 00:29:33.946 
1517.438 - 1524.007: 19.5983% ( 300) 00:29:33.946 1524.007 - 1530.576: 20.0333% ( 298) 00:29:33.946 1530.576 - 1537.145: 20.5311% ( 341) 00:29:33.946 1537.145 - 1543.714: 21.0012% ( 322) 00:29:33.946 1543.714 - 1550.283: 21.4698% ( 321) 00:29:33.946 1550.283 - 1556.852: 21.9427% ( 324) 00:29:33.946 1556.852 - 1563.421: 22.4697% ( 361) 00:29:33.946 1563.421 - 1569.990: 23.0099% ( 370) 00:29:33.946 1569.990 - 1576.559: 23.5748% ( 387) 00:29:33.946 1576.559 - 1583.128: 24.0610% ( 333) 00:29:33.946 1583.128 - 1589.697: 24.5310% ( 322) 00:29:33.946 1589.697 - 1596.266: 25.0230% ( 337) 00:29:33.946 1596.266 - 1602.835: 25.4741% ( 309) 00:29:33.946 1602.835 - 1609.404: 26.0274% ( 379) 00:29:33.946 1609.404 - 1615.973: 26.5003% ( 324) 00:29:33.946 1615.973 - 1622.542: 27.1062% ( 415) 00:29:33.946 1622.542 - 1629.111: 27.7062% ( 411) 00:29:33.946 1629.111 - 1635.680: 28.3018% ( 408) 00:29:33.946 1635.680 - 1642.249: 28.9499% ( 444) 00:29:33.946 1642.249 - 1648.818: 29.5777% ( 430) 00:29:33.946 1648.818 - 1655.387: 30.1733% ( 408) 00:29:33.946 1655.387 - 1661.956: 30.6974% ( 359) 00:29:33.946 1661.956 - 1668.525: 31.2404% ( 372) 00:29:33.946 1668.525 - 1675.094: 31.8390% ( 410) 00:29:33.946 1675.094 - 1681.663: 32.3426% ( 345) 00:29:33.946 1681.663 - 1694.801: 33.4886% ( 785) 00:29:33.946 1694.801 - 1707.939: 34.7440% ( 860) 00:29:33.946 1707.939 - 1721.077: 36.0389% ( 887) 00:29:33.946 1721.077 - 1734.215: 37.4301% ( 953) 00:29:33.946 1734.215 - 1747.353: 38.9586% ( 1047) 00:29:33.946 1747.353 - 1760.491: 40.4797% ( 1042) 00:29:33.946 1760.491 - 1773.629: 41.9629% ( 1016) 00:29:33.946 1773.629 - 1786.767: 43.4855% ( 1043) 00:29:33.946 1786.767 - 1799.905: 45.2066% ( 1179) 00:29:33.946 1799.905 - 1813.043: 46.8606% ( 1133) 00:29:33.946 1813.043 - 1826.181: 48.5146% ( 1133) 00:29:33.946 1826.181 - 1839.319: 50.3015% ( 1224) 00:29:33.946 1839.319 - 1852.457: 52.0635% ( 1207) 00:29:33.946 1852.457 - 1865.595: 53.7875% ( 1181) 00:29:33.946 1865.595 - 1878.733: 55.6912% ( 1304) 00:29:33.946 1878.733 - 1891.871: 57.4999% ( 1239) 00:29:33.946 1891.871 - 1905.009: 59.3159% ( 1244) 00:29:33.946 1905.009 - 1918.147: 61.1976% ( 1289) 00:29:33.946 1918.147 - 1931.285: 63.0910% ( 1297) 00:29:33.946 1931.285 - 1944.423: 64.8939% ( 1235) 00:29:33.946 1944.423 - 1957.561: 66.6779% ( 1222) 00:29:33.946 1957.561 - 1970.699: 68.3610% ( 1153) 00:29:33.946 1970.699 - 1983.837: 70.0632% ( 1166) 00:29:33.946 1983.837 - 1996.975: 71.7333% ( 1144) 00:29:33.946 1996.975 - 2010.113: 73.2383% ( 1031) 00:29:33.946 2010.113 - 2023.251: 74.6923% ( 996) 00:29:33.946 2023.251 - 2036.389: 76.1843% ( 1022) 00:29:33.946 2036.389 - 2049.527: 77.4251% ( 850) 00:29:33.946 2049.527 - 2062.665: 78.5069% ( 741) 00:29:33.946 2062.665 - 2075.803: 79.4558% ( 650) 00:29:33.946 2075.803 - 2088.941: 80.4047% ( 650) 00:29:33.946 2088.941 - 2102.079: 81.3506% ( 648) 00:29:33.946 2102.079 - 2115.217: 82.1200% ( 527) 00:29:33.946 2115.217 - 2128.355: 82.9433% ( 564) 00:29:33.946 2128.355 - 2141.493: 83.7126% ( 527) 00:29:33.946 2141.493 - 2154.631: 84.3754% ( 454) 00:29:33.946 2154.631 - 2167.769: 84.9856% ( 418) 00:29:33.946 2167.769 - 2180.907: 85.5345% ( 376) 00:29:33.946 2180.907 - 2194.045: 86.0352% ( 343) 00:29:33.946 2194.045 - 2207.183: 86.5287% ( 338) 00:29:33.946 2207.183 - 2220.321: 86.9900% ( 316) 00:29:33.946 2220.321 - 2233.459: 87.4717% ( 330) 00:29:33.946 2233.459 - 2246.597: 87.8469% ( 257) 00:29:33.946 2246.597 - 2259.735: 88.2936% ( 306) 00:29:33.946 2259.735 - 2272.873: 88.6031% ( 212) 00:29:33.946 2272.873 - 2286.011: 88.8556% ( 
173) 00:29:33.946 2286.011 - 2299.149: 89.1126% ( 176) 00:29:33.946 2299.149 - 2312.287: 89.3783% ( 182) 00:29:33.946 2312.287 - 2325.425: 89.6206% ( 166) 00:29:33.946 2325.425 - 2338.563: 89.8308% ( 144) 00:29:33.946 2338.563 - 2351.701: 90.0717% ( 165) 00:29:33.946 2351.701 - 2364.839: 90.3271% ( 175) 00:29:33.946 2364.839 - 2377.977: 90.5490% ( 152) 00:29:33.946 2377.977 - 2391.115: 90.7432% ( 133) 00:29:33.946 2391.115 - 2404.253: 90.9578% ( 147) 00:29:33.946 2404.253 - 2417.391: 91.1578% ( 137) 00:29:33.946 2417.391 - 2430.529: 91.3549% ( 135) 00:29:33.946 2430.529 - 2443.667: 91.5403% ( 127) 00:29:33.946 2443.667 - 2456.805: 91.6790% ( 95) 00:29:33.946 2456.805 - 2469.943: 91.7972% ( 81) 00:29:33.946 2469.943 - 2483.081: 91.9052% ( 74) 00:29:33.946 2483.081 - 2496.219: 92.0176% ( 77) 00:29:33.946 2496.219 - 2509.357: 92.1490% ( 90) 00:29:33.946 2509.357 - 2522.495: 92.2994% ( 103) 00:29:33.946 2522.495 - 2535.633: 92.4249% ( 86) 00:29:33.946 2535.633 - 2548.771: 92.5403% ( 79) 00:29:33.946 2548.771 - 2561.909: 92.7081% ( 115) 00:29:33.946 2561.909 - 2575.047: 92.9096% ( 138) 00:29:33.946 2575.047 - 2588.185: 93.1373% ( 156) 00:29:33.946 2588.185 - 2601.323: 93.3373% ( 137) 00:29:33.946 2601.323 - 2614.461: 93.4570% ( 82) 00:29:33.946 2614.461 - 2627.599: 93.5621% ( 72) 00:29:33.946 2627.599 - 2640.737: 93.6775% ( 79) 00:29:33.946 2640.737 - 2653.875: 93.7884% ( 76) 00:29:33.946 2653.875 - 2667.013: 93.9271% ( 95) 00:29:33.946 2667.013 - 2680.151: 94.0643% ( 94) 00:29:33.946 2680.151 - 2693.289: 94.2453% ( 124) 00:29:33.946 2693.289 - 2706.427: 94.4322% ( 128) 00:29:33.946 2706.427 - 2719.565: 94.6132% ( 124) 00:29:33.946 2719.565 - 2732.703: 94.7913% ( 122) 00:29:33.946 2732.703 - 2745.841: 94.9227% ( 90) 00:29:33.946 2745.841 - 2758.979: 95.0380% ( 79) 00:29:33.946 2758.979 - 2772.117: 95.1388% ( 69) 00:29:33.946 2772.117 - 2785.255: 95.2424% ( 71) 00:29:33.946 2785.255 - 2798.393: 95.3621% ( 82) 00:29:33.946 2798.393 - 2811.531: 95.4628% ( 69) 00:29:33.946 2811.531 - 2824.669: 95.5694% ( 73) 00:29:33.946 2824.669 - 2837.807: 95.6541% ( 58) 00:29:33.946 2837.807 - 2850.945: 95.7300% ( 52) 00:29:33.946 2850.945 - 2864.083: 95.8161% ( 59) 00:29:33.946 2864.083 - 2877.221: 95.8760% ( 41) 00:29:33.946 2877.221 - 2890.359: 95.9446% ( 47) 00:29:33.946 2890.359 - 2903.497: 96.0059% ( 42) 00:29:33.946 2903.497 - 2916.635: 96.0789% ( 50) 00:29:33.946 2916.635 - 2929.773: 96.1694% ( 62) 00:29:33.946 2929.773 - 2942.911: 96.2511% ( 56) 00:29:33.946 2942.911 - 2956.049: 96.3241% ( 50) 00:29:33.946 2956.049 - 2969.187: 96.4030% ( 54) 00:29:33.946 2969.187 - 2982.325: 96.4906% ( 60) 00:29:33.946 2982.325 - 2995.463: 96.5709% ( 55) 00:29:33.946 2995.463 - 3008.601: 96.6482% ( 53) 00:29:33.946 3008.601 - 3021.739: 96.7125% ( 44) 00:29:33.946 3021.739 - 3034.877: 96.8000% ( 60) 00:29:33.946 3034.877 - 3048.015: 96.8891% ( 61) 00:29:33.946 3048.015 - 3061.153: 96.9592% ( 48) 00:29:33.946 3061.153 - 3074.291: 97.0161% ( 39) 00:29:33.946 3074.291 - 3087.429: 97.1022% ( 59) 00:29:33.946 3087.429 - 3100.567: 97.1738% ( 49) 00:29:33.946 3100.567 - 3113.705: 97.2059% ( 22) 00:29:33.946 3113.705 - 3126.843: 97.2380% ( 22) 00:29:33.946 3126.843 - 3139.981: 97.2847% ( 32) 00:29:33.946 3139.981 - 3153.119: 97.3460% ( 42) 00:29:33.946 3153.119 - 3166.257: 97.4044% ( 40) 00:29:33.946 3166.257 - 3179.395: 97.4657% ( 42) 00:29:33.946 3179.395 - 3192.533: 97.5533% ( 60) 00:29:33.946 3192.533 - 3205.671: 97.6438% ( 62) 00:29:33.946 3205.671 - 3218.809: 97.7329% ( 61) 00:29:33.946 3218.809 - 3231.947: 97.7942% ( 42) 
00:29:33.946 3231.947 - 3245.085: 97.8643% ( 48) 00:29:33.946 3245.085 - 3258.223: 97.9270% ( 43) 00:29:33.946 3258.223 - 3271.361: 98.0248% ( 67) 00:29:33.946 3271.361 - 3284.499: 98.1285% ( 71) 00:29:33.946 3284.499 - 3297.637: 98.2307% ( 70) 00:29:33.946 3297.637 - 3310.775: 98.3081% ( 53) 00:29:33.946 3310.775 - 3323.913: 98.3883% ( 55) 00:29:33.946 3323.913 - 3337.051: 98.4862% ( 67) 00:29:33.946 3337.051 - 3350.189: 98.5796% ( 64) 00:29:33.946 3350.189 - 3363.327: 98.6526% ( 50) 00:29:33.946 3363.327 - 3389.603: 98.7723% ( 82) 00:29:33.946 3389.603 - 3415.879: 98.8730% ( 69) 00:29:33.946 3415.879 - 3442.155: 98.9475% ( 51) 00:29:33.946 3442.155 - 3468.431: 98.9694% ( 15) 00:29:33.946 3468.431 - 3494.707: 99.0175% ( 33) 00:29:33.946 3494.707 - 3520.983: 99.0555% ( 26) 00:29:33.946 3520.983 - 3547.259: 99.0759% ( 14) 00:29:33.946 3547.259 - 3573.535: 99.0905% ( 10) 00:29:33.946 3573.535 - 3599.811: 99.1110% ( 14) 00:29:33.946 3599.811 - 3626.087: 99.1343% ( 16) 00:29:33.946 3626.087 - 3652.363: 99.1694% ( 24) 00:29:33.946 3652.363 - 3678.639: 99.2000% ( 21) 00:29:33.946 3678.639 - 3704.915: 99.2409% ( 28) 00:29:33.947 3704.915 - 3731.191: 99.2774% ( 25) 00:29:33.947 3731.191 - 3757.467: 99.3110% ( 23) 00:29:33.947 3757.467 - 3783.743: 99.3475% ( 25) 00:29:33.947 3783.743 - 3810.019: 99.3927% ( 31) 00:29:33.947 3810.019 - 3836.294: 99.4263% ( 23) 00:29:33.947 3836.294 - 3862.570: 99.4584% ( 22) 00:29:33.947 3862.570 - 3888.846: 99.4949% ( 25) 00:29:33.947 3888.846 - 3915.122: 99.5329% ( 26) 00:29:33.947 3915.122 - 3941.398: 99.5693% ( 25) 00:29:33.947 3941.398 - 3967.674: 99.6146% ( 31) 00:29:33.947 3967.674 - 3993.950: 99.6657% ( 35) 00:29:33.947 3993.950 - 4020.226: 99.6978% ( 22) 00:29:33.947 4020.226 - 4046.502: 99.7197% ( 15) 00:29:33.947 4046.502 - 4072.778: 99.7474% ( 19) 00:29:33.947 4072.778 - 4099.054: 99.7679% ( 14) 00:29:33.947 4099.054 - 4125.330: 99.7737% ( 4) 00:29:33.947 4125.330 - 4151.606: 99.7898% ( 11) 00:29:33.947 4151.606 - 4177.882: 99.8073% ( 12) 00:29:33.947 4177.882 - 4204.158: 99.8219% ( 10) 00:29:33.947 4204.158 - 4230.434: 99.8321% ( 7) 00:29:33.947 4230.434 - 4256.710: 99.8496% ( 12) 00:29:33.947 4256.710 - 4282.986: 99.8657% ( 11) 00:29:33.947 4388.090 - 4414.366: 99.8672% ( 1) 00:29:33.947 4466.918 - 4493.194: 99.8759% ( 6) 00:29:33.947 4493.194 - 4519.470: 99.8876% ( 8) 00:29:33.947 4650.850 - 4677.126: 99.8891% ( 1) 00:29:33.947 4834.782 - 4861.058: 99.8905% ( 1) 00:29:33.947 4966.162 - 4992.438: 99.8920% ( 1) 00:29:33.947 5018.714 - 5044.990: 99.8949% ( 2) 00:29:33.947 5071.266 - 5097.542: 99.8964% ( 1) 00:29:33.947 5123.818 - 5150.094: 99.8993% ( 2) 00:29:33.947 5176.370 - 5202.646: 99.9007% ( 1) 00:29:33.947 5255.198 - 5281.474: 99.9022% ( 1) 00:29:33.947 5334.026 - 5360.302: 99.9051% ( 2) 00:29:33.947 5360.302 - 5386.578: 99.9080% ( 2) 00:29:33.947 5412.854 - 5439.130: 99.9095% ( 1) 00:29:33.947 5465.406 - 5491.682: 99.9226% ( 9) 00:29:33.947 5491.682 - 5517.958: 99.9241% ( 1) 00:29:33.947 5544.234 - 5570.510: 99.9255% ( 1) 00:29:33.947 5570.510 - 5596.786: 99.9270% ( 1) 00:29:33.947 5675.614 - 5701.890: 99.9285% ( 1) 00:29:33.947 5912.098 - 5938.374: 99.9299% ( 1) 00:29:33.947 5990.926 - 6017.202: 99.9314% ( 1) 00:29:33.947 6069.754 - 6096.030: 99.9328% ( 1) 00:29:33.947 6096.030 - 6122.306: 99.9358% ( 2) 00:29:33.947 6253.686 - 6279.962: 99.9372% ( 1) 00:29:33.947 6306.238 - 6332.514: 99.9387% ( 1) 00:29:33.947 6516.445 - 6542.721: 99.9401% ( 1) 00:29:33.947 6726.653 - 6779.205: 99.9431% ( 2) 00:29:33.947 6884.309 - 6936.861: 99.9504% ( 5) 
00:29:33.947 6936.861 - 6989.413: 99.9547% ( 3) 00:29:33.947 6989.413 - 7041.965: 99.9562% ( 1) 00:29:33.947 7567.485 - 7620.037: 99.9577% ( 1) 00:29:33.947 7777.693 - 7830.245: 99.9591% ( 1) 00:29:33.947 8040.453 - 8093.005: 99.9606% ( 1) 00:29:33.947 8093.005 - 8145.557: 99.9664% ( 4) 00:29:33.947 8460.869 - 8513.421: 99.9737% ( 5) 00:29:33.947 8513.421 - 8565.973: 99.9869% ( 9) 00:29:33.947 8776.181 - 8828.733: 99.9883% ( 1) 00:29:33.947 8933.836 - 8986.388: 99.9898% ( 1) 00:29:33.947 9144.044 - 9196.596: 99.9942% ( 3) 00:29:33.947 9354.252 - 9406.804: 99.9971% ( 2) 00:29:33.947 9932.324 - 9984.876: 99.9985% ( 1) 00:29:33.947 10247.636 - 10300.188: 100.0000% ( 1) 00:29:33.947 00:29:33.947 09:56:01 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:29:33.947 00:29:33.947 real 0m4.166s 00:29:33.947 user 0m2.593s 00:29:33.947 sys 0m1.569s 00:29:33.947 ************************************ 00:29:33.947 END TEST nvme_perf 00:29:33.947 ************************************ 00:29:33.947 09:56:01 nvme.nvme_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:33.947 09:56:01 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:29:33.947 09:56:02 nvme -- common/autotest_common.sh@1142 -- # return 0 00:29:33.947 09:56:02 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:29:33.947 09:56:02 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:29:33.947 09:56:02 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:33.947 09:56:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:29:33.947 ************************************ 00:29:33.947 START TEST nvme_hello_world 00:29:33.947 ************************************ 00:29:33.947 09:56:02 nvme.nvme_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:29:34.912 EAL: TSC is not safe to use in SMP mode 00:29:34.912 EAL: TSC is not invariant 00:29:34.912 [2024-07-15 09:56:02.792407] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:34.912 Initializing NVMe Controllers 00:29:34.912 Attaching to 0000:00:10.0 00:29:34.912 Attached to 0000:00:10.0 00:29:34.912 Namespace ID: 1 size: 5GB 00:29:34.912 Initialization complete. 00:29:34.912 INFO: using host memory buffer for IO 00:29:34.912 Hello world! 
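The two spdk_nvme_perf passes recorded above can be reproduced outside the autotest harness. A minimal sketch, assuming the repo layout shown in this log and that the controller at 0000:00:10.0 is already bound to a userspace driver (scripts/setup.sh is the usual SPDK helper for that step, and is an assumed addition here; the perf invocations themselves are copied verbatim from the log):

  cd /home/vagrant/spdk_repo/spdk
  sudo scripts/setup.sh    # assumed step: rebind the NVMe device away from the kernel driver
  # queue depth 128, 12288-byte I/Os, 1-second run; -LL enables the latency summaries and histograms shown above
  sudo build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N
  sudo build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0

As a sanity check, the summary rows are internally consistent: 93467.00 IOPS x 12288 B = 93467 x 12 / 1024 MiB/s = 1095.32 MiB/s for the read pass, and 68451.65 x 12 / 1024 = 802.17 MiB/s for the write pass, matching the MiB/s column of both tables.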
00:29:34.912 00:29:34.912 real 0m0.821s 00:29:34.912 user 0m0.008s 00:29:34.912 sys 0m0.813s 00:29:34.912 09:56:02 nvme.nvme_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:34.912 09:56:02 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:29:34.912 ************************************ 00:29:34.912 END TEST nvme_hello_world 00:29:34.912 ************************************ 00:29:34.912 09:56:02 nvme -- common/autotest_common.sh@1142 -- # return 0 00:29:34.912 09:56:02 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:29:34.912 09:56:02 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:34.912 09:56:02 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:34.912 09:56:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:29:34.912 ************************************ 00:29:34.912 START TEST nvme_sgl 00:29:34.912 ************************************ 00:29:34.912 09:56:02 nvme.nvme_sgl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:29:35.846 EAL: TSC is not safe to use in SMP mode 00:29:35.846 EAL: TSC is not invariant 00:29:35.846 [2024-07-15 09:56:03.667480] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:35.846 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:29:35.846 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:29:35.846 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:29:35.846 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:29:35.846 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:29:35.846 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:29:35.846 NVMe Readv/Writev Request test 00:29:35.846 Attaching to 0000:00:10.0 00:29:35.846 Attached to 0000:00:10.0 00:29:35.846 0000:00:10.0: build_io_request_2 test passed 00:29:35.846 0000:00:10.0: build_io_request_4 test passed 00:29:35.846 0000:00:10.0: build_io_request_5 test passed 00:29:35.846 0000:00:10.0: build_io_request_6 test passed 00:29:35.846 0000:00:10.0: build_io_request_7 test passed 00:29:35.846 0000:00:10.0: build_io_request_10 test passed 00:29:35.846 Cleaning up... 
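The hello_world and sgl binaries exercised above are standalone and, under the same assumptions as the perf sketch earlier, can be run directly (paths as logged):

  sudo /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
  sudo /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl

Note that the "Invalid IO length parameter" lines in the sgl output are not harness failures: build_io_request_0/1/3/8/9/11 report invalid lengths while 2/4/5/6/7/10 report "test passed", and the suite still reaches its END TEST nvme_sgl marker below, so the rejected requests appear to be deliberate negative cases.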
00:29:35.846 00:29:35.846 real 0m0.824s 00:29:35.846 user 0m0.016s 00:29:35.846 sys 0m0.808s 00:29:35.846 09:56:03 nvme.nvme_sgl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:35.846 ************************************ 00:29:35.846 END TEST nvme_sgl 00:29:35.846 ************************************ 00:29:35.846 09:56:03 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:29:35.846 09:56:03 nvme -- common/autotest_common.sh@1142 -- # return 0 00:29:35.846 09:56:03 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:29:35.846 09:56:03 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:35.846 09:56:03 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:35.846 09:56:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:29:35.846 ************************************ 00:29:35.846 START TEST nvme_e2edp 00:29:35.846 ************************************ 00:29:35.846 09:56:03 nvme.nvme_e2edp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:29:36.781 EAL: TSC is not safe to use in SMP mode 00:29:36.781 EAL: TSC is not invariant 00:29:36.781 [2024-07-15 09:56:04.535710] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:36.781 NVMe Write/Read with End-to-End data protection test 00:29:36.781 Attaching to 0000:00:10.0 00:29:36.781 Attached to 0000:00:10.0 00:29:36.781 Cleaning up... 00:29:36.781 00:29:36.781 real 0m0.815s 00:29:36.781 user 0m0.015s 00:29:36.781 sys 0m0.799s 00:29:36.781 09:56:04 nvme.nvme_e2edp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:36.781 ************************************ 00:29:36.781 END TEST nvme_e2edp 00:29:36.781 ************************************ 00:29:36.781 09:56:04 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:29:36.781 09:56:04 nvme -- common/autotest_common.sh@1142 -- # return 0 00:29:36.781 09:56:04 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:29:36.781 09:56:04 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:36.781 09:56:04 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:36.781 09:56:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:29:36.781 ************************************ 00:29:36.781 START TEST nvme_reserve 00:29:36.781 ************************************ 00:29:36.781 09:56:04 nvme.nvme_reserve -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:29:37.040 EAL: TSC is not safe to use in SMP mode 00:29:37.040 EAL: TSC is not invariant 00:29:37.040 [2024-07-15 09:56:05.076874] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:37.040 ===================================================== 00:29:37.040 NVMe Controller at PCI bus 0, device 16, function 0 00:29:37.040 ===================================================== 00:29:37.040 Reservations: Not Supported 00:29:37.040 Reservation test passed 00:29:37.040 00:29:37.040 real 0m0.474s 00:29:37.040 user 0m0.014s 00:29:37.040 sys 0m0.459s 00:29:37.040 09:56:05 nvme.nvme_reserve -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:37.040 09:56:05 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:29:37.040 ************************************ 00:29:37.040 END TEST nvme_reserve 00:29:37.040 ************************************ 00:29:37.300 09:56:05 nvme -- 
common/autotest_common.sh@1142 -- # return 0 00:29:37.300 09:56:05 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:29:37.300 09:56:05 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:37.300 09:56:05 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:37.300 09:56:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:29:37.300 ************************************ 00:29:37.300 START TEST nvme_err_injection 00:29:37.300 ************************************ 00:29:37.300 09:56:05 nvme.nvme_err_injection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:29:37.867 EAL: TSC is not safe to use in SMP mode 00:29:37.867 EAL: TSC is not invariant 00:29:37.867 [2024-07-15 09:56:05.926162] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:38.127 NVMe Error Injection test 00:29:38.127 Attaching to 0000:00:10.0 00:29:38.127 Attached to 0000:00:10.0 00:29:38.127 0000:00:10.0: get features failed as expected 00:29:38.127 0000:00:10.0: get features successfully as expected 00:29:38.127 0000:00:10.0: read failed as expected 00:29:38.127 0000:00:10.0: read successfully as expected 00:29:38.127 Cleaning up... 00:29:38.127 00:29:38.127 real 0m0.820s 00:29:38.127 user 0m0.017s 00:29:38.127 sys 0m0.802s 00:29:38.127 09:56:05 nvme.nvme_err_injection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:38.127 09:56:05 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:29:38.127 ************************************ 00:29:38.127 END TEST nvme_err_injection 00:29:38.127 ************************************ 00:29:38.127 09:56:06 nvme -- common/autotest_common.sh@1142 -- # return 0 00:29:38.127 09:56:06 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:29:38.127 09:56:06 nvme -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:29:38.127 09:56:06 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:38.127 09:56:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:29:38.127 ************************************ 00:29:38.127 START TEST nvme_overhead 00:29:38.127 ************************************ 00:29:38.127 09:56:06 nvme.nvme_overhead -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:29:38.695 EAL: TSC is not safe to use in SMP mode 00:29:38.695 EAL: TSC is not invariant 00:29:38.695 [2024-07-15 09:56:06.786582] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:40.109 Initializing NVMe Controllers 00:29:40.109 Attaching to 0000:00:10.0 00:29:40.109 Attached to 0000:00:10.0 00:29:40.109 Initialization complete. Launching workers. 
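The overhead tool whose results follow measures per-I/O software cost rather than throughput: over a one-second run (-t 1) of 4096-byte I/Os (-o 4096) it records the nanoseconds spent in the submit and completion paths and prints avg/min/max plus two histograms (judging by the output below, -H is what enables the histogram dump). A standalone invocation, under the same path and device-binding assumptions as the perf sketch earlier:

  sudo /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0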
00:29:40.109 submit (in ns) avg, min, max = 9933.8, 8317.9, 181891.9 00:29:40.109 complete (in ns) avg, min, max = 7149.5, 6081.5, 60582.0 00:29:40.109 00:29:40.109 Submit histogram 00:29:40.109 ================ 00:29:40.109 Range in us Cumulative Count 00:29:40.109 8.314 - 8.365: 0.0486% ( 5) 00:29:40.109 8.365 - 8.417: 0.1748% ( 13) 00:29:40.109 8.417 - 8.468: 0.5342% ( 37) 00:29:40.109 8.468 - 8.519: 1.3502% ( 84) 00:29:40.109 8.519 - 8.570: 2.4964% ( 118) 00:29:40.109 8.570 - 8.622: 3.6717% ( 121) 00:29:40.109 8.622 - 8.673: 4.8276% ( 119) 00:29:40.109 8.673 - 8.724: 6.1098% ( 132) 00:29:40.109 8.724 - 8.776: 8.0525% ( 200) 00:29:40.109 8.776 - 8.827: 10.4420% ( 246) 00:29:40.109 8.827 - 8.878: 13.1229% ( 276) 00:29:40.109 8.878 - 8.930: 16.0855% ( 305) 00:29:40.109 8.930 - 8.981: 20.2817% ( 432) 00:29:40.109 8.981 - 9.032: 26.4012% ( 630) 00:29:40.109 9.032 - 9.084: 33.9971% ( 782) 00:29:40.109 9.084 - 9.135: 41.9233% ( 816) 00:29:40.109 9.135 - 9.186: 49.2181% ( 751) 00:29:40.109 9.186 - 9.238: 55.2404% ( 620) 00:29:40.109 9.238 - 9.289: 60.7091% ( 563) 00:29:40.109 9.289 - 9.340: 64.8373% ( 425) 00:29:40.109 9.340 - 9.392: 68.5381% ( 381) 00:29:40.109 9.392 - 9.443: 71.4036% ( 295) 00:29:40.109 9.443 - 9.494: 73.7542% ( 242) 00:29:40.109 9.494 - 9.546: 75.5998% ( 190) 00:29:40.109 9.546 - 9.597: 77.3968% ( 185) 00:29:40.109 9.597 - 9.648: 78.8732% ( 152) 00:29:40.109 9.648 - 9.700: 79.9320% ( 109) 00:29:40.109 9.700 - 9.751: 80.8839% ( 98) 00:29:40.109 9.751 - 9.802: 81.8747% ( 102) 00:29:40.109 9.802 - 9.853: 82.7489% ( 90) 00:29:40.109 9.853 - 9.905: 83.6425% ( 92) 00:29:40.109 9.905 - 9.956: 84.6236% ( 101) 00:29:40.109 9.956 - 10.007: 85.4784% ( 88) 00:29:40.109 10.007 - 10.059: 86.5566% ( 111) 00:29:40.109 10.059 - 10.110: 87.2657% ( 73) 00:29:40.109 10.110 - 10.161: 87.9165% ( 67) 00:29:40.109 10.161 - 10.213: 88.4701% ( 57) 00:29:40.109 10.213 - 10.264: 88.9558% ( 50) 00:29:40.109 10.264 - 10.315: 89.4026% ( 46) 00:29:40.109 10.315 - 10.367: 89.7329% ( 34) 00:29:40.109 10.367 - 10.418: 89.9951% ( 27) 00:29:40.109 10.418 - 10.469: 90.2477% ( 26) 00:29:40.109 10.469 - 10.521: 90.5002% ( 26) 00:29:40.109 10.521 - 10.572: 90.7237% ( 23) 00:29:40.109 10.572 - 10.623: 90.7722% ( 5) 00:29:40.109 10.623 - 10.675: 90.8791% ( 11) 00:29:40.109 10.675 - 10.726: 90.9665% ( 9) 00:29:40.109 10.726 - 10.777: 91.0345% ( 7) 00:29:40.109 10.777 - 10.829: 91.1122% ( 8) 00:29:40.109 10.829 - 10.880: 91.1413% ( 3) 00:29:40.109 10.880 - 10.931: 91.1705% ( 3) 00:29:40.109 10.931 - 10.983: 91.1996% ( 3) 00:29:40.109 10.983 - 11.034: 91.2676% ( 7) 00:29:40.109 11.034 - 11.085: 91.3453% ( 8) 00:29:40.109 11.085 - 11.137: 91.4230% ( 8) 00:29:40.109 11.137 - 11.188: 91.5299% ( 11) 00:29:40.109 11.188 - 11.239: 91.6173% ( 9) 00:29:40.109 11.239 - 11.290: 91.7630% ( 15) 00:29:40.109 11.290 - 11.342: 91.9864% ( 23) 00:29:40.109 11.342 - 11.393: 92.0738% ( 9) 00:29:40.109 11.393 - 11.444: 92.2778% ( 21) 00:29:40.109 11.444 - 11.496: 92.4721% ( 20) 00:29:40.109 11.496 - 11.547: 92.5983% ( 13) 00:29:40.109 11.547 - 11.598: 92.6761% ( 8) 00:29:40.109 11.598 - 11.650: 92.7635% ( 9) 00:29:40.109 11.650 - 11.701: 92.8703% ( 11) 00:29:40.109 11.701 - 11.752: 92.9092% ( 4) 00:29:40.109 11.752 - 11.804: 92.9772% ( 7) 00:29:40.109 11.804 - 11.855: 93.0160% ( 4) 00:29:40.109 11.855 - 11.906: 93.0257% ( 1) 00:29:40.109 11.906 - 11.958: 93.0646% ( 4) 00:29:40.109 11.958 - 12.009: 93.0743% ( 1) 00:29:40.109 12.009 - 12.060: 93.1034% ( 3) 00:29:40.109 12.060 - 12.112: 93.1326% ( 3) 00:29:40.109 12.112 - 12.163: 93.1520% ( 
2) 00:29:40.109 12.214 - 12.266: 93.1812% ( 3) 00:29:40.109 12.266 - 12.317: 93.2103% ( 3) 00:29:40.109 12.368 - 12.420: 93.2297% ( 2) 00:29:40.109 12.420 - 12.471: 93.2589% ( 3) 00:29:40.109 12.471 - 12.522: 93.2783% ( 2) 00:29:40.109 12.522 - 12.573: 93.2977% ( 2) 00:29:40.109 12.573 - 12.625: 93.3463% ( 5) 00:29:40.109 12.625 - 12.676: 93.4143% ( 7) 00:29:40.109 12.676 - 12.727: 93.4726% ( 6) 00:29:40.109 12.727 - 12.779: 93.5114% ( 4) 00:29:40.109 12.779 - 12.830: 93.5794% ( 7) 00:29:40.109 12.830 - 12.881: 93.6280% ( 5) 00:29:40.109 12.881 - 12.933: 93.6863% ( 6) 00:29:40.109 12.933 - 12.984: 93.7057% ( 2) 00:29:40.109 12.984 - 13.035: 93.7445% ( 4) 00:29:40.109 13.035 - 13.087: 93.8028% ( 6) 00:29:40.109 13.087 - 13.138: 93.8222% ( 2) 00:29:40.109 13.138 - 13.241: 93.9000% ( 8) 00:29:40.109 13.241 - 13.343: 93.9777% ( 8) 00:29:40.109 13.343 - 13.446: 94.0942% ( 12) 00:29:40.109 13.446 - 13.549: 94.2011% ( 11) 00:29:40.109 13.549 - 13.651: 94.3273% ( 13) 00:29:40.109 13.651 - 13.754: 94.4245% ( 10) 00:29:40.109 13.754 - 13.856: 94.5410% ( 12) 00:29:40.109 13.856 - 13.959: 94.6576% ( 12) 00:29:40.109 13.959 - 14.062: 94.7839% ( 13) 00:29:40.109 14.062 - 14.164: 94.8907% ( 11) 00:29:40.109 14.164 - 14.267: 95.0170% ( 13) 00:29:40.109 14.267 - 14.370: 95.1044% ( 9) 00:29:40.109 14.370 - 14.472: 95.2016% ( 10) 00:29:40.109 14.472 - 14.575: 95.2987% ( 10) 00:29:40.109 14.575 - 14.678: 95.3570% ( 6) 00:29:40.109 14.678 - 14.780: 95.4055% ( 5) 00:29:40.109 14.780 - 14.883: 95.4444% ( 4) 00:29:40.109 14.883 - 14.986: 95.5610% ( 12) 00:29:40.109 14.986 - 15.088: 95.7164% ( 16) 00:29:40.109 15.088 - 15.191: 95.9495% ( 24) 00:29:40.109 15.191 - 15.293: 96.1923% ( 25) 00:29:40.109 15.293 - 15.396: 96.4157% ( 23) 00:29:40.109 15.396 - 15.499: 96.5420% ( 13) 00:29:40.110 15.499 - 15.601: 96.6294% ( 9) 00:29:40.110 15.601 - 15.704: 96.7071% ( 8) 00:29:40.110 15.704 - 15.807: 96.8237% ( 12) 00:29:40.110 15.807 - 15.909: 96.8820% ( 6) 00:29:40.110 15.909 - 16.012: 96.8917% ( 1) 00:29:40.110 16.012 - 16.115: 96.9597% ( 7) 00:29:40.110 16.115 - 16.217: 96.9888% ( 3) 00:29:40.110 16.217 - 16.320: 97.0180% ( 3) 00:29:40.110 16.320 - 16.422: 97.0471% ( 3) 00:29:40.110 16.422 - 16.525: 97.0860% ( 4) 00:29:40.110 16.525 - 16.628: 97.1151% ( 3) 00:29:40.110 16.628 - 16.730: 97.1540% ( 4) 00:29:40.110 16.730 - 16.833: 97.1637% ( 1) 00:29:40.110 16.833 - 16.936: 97.1734% ( 1) 00:29:40.110 17.038 - 17.141: 97.1831% ( 1) 00:29:40.110 17.141 - 17.244: 97.2122% ( 3) 00:29:40.110 17.244 - 17.346: 97.2317% ( 2) 00:29:40.110 17.346 - 17.449: 97.2511% ( 2) 00:29:40.110 17.449 - 17.552: 97.2608% ( 1) 00:29:40.110 17.654 - 17.757: 97.2802% ( 2) 00:29:40.110 17.962 - 18.065: 97.3288% ( 5) 00:29:40.110 18.065 - 18.167: 97.4648% ( 14) 00:29:40.110 18.167 - 18.270: 97.7368% ( 28) 00:29:40.110 18.270 - 18.373: 97.8825% ( 15) 00:29:40.110 18.373 - 18.475: 98.0476% ( 17) 00:29:40.110 18.475 - 18.578: 98.1253% ( 8) 00:29:40.110 18.578 - 18.681: 98.1933% ( 7) 00:29:40.110 18.681 - 18.783: 98.2127% ( 2) 00:29:40.110 18.783 - 18.886: 98.2322% ( 2) 00:29:40.110 18.886 - 18.989: 98.2516% ( 2) 00:29:40.110 19.091 - 19.194: 98.2613% ( 1) 00:29:40.110 19.502 - 19.604: 98.2807% ( 2) 00:29:40.110 19.604 - 19.707: 98.2904% ( 1) 00:29:40.110 20.118 - 20.220: 98.3001% ( 1) 00:29:40.110 20.425 - 20.528: 98.3099% ( 1) 00:29:40.110 20.733 - 20.836: 98.3196% ( 1) 00:29:40.110 21.041 - 21.144: 98.3293% ( 1) 00:29:40.110 21.144 - 21.247: 98.3487% ( 2) 00:29:40.110 21.247 - 21.349: 98.3681% ( 2) 00:29:40.110 21.349 - 21.452: 98.4556% ( 9) 
00:29:40.110 21.452 - 21.555: 98.5624% ( 11) 00:29:40.110 21.555 - 21.657: 98.6207% ( 6) 00:29:40.110 21.657 - 21.760: 98.6887% ( 7) 00:29:40.110 21.760 - 21.862: 98.7373% ( 5) 00:29:40.110 21.862 - 21.965: 98.7664% ( 3) 00:29:40.110 21.965 - 22.068: 98.7761% ( 1) 00:29:40.110 22.068 - 22.170: 98.7858% ( 1) 00:29:40.110 22.170 - 22.273: 98.8052% ( 2) 00:29:40.110 22.273 - 22.376: 98.8150% ( 1) 00:29:40.110 23.402 - 23.505: 98.8247% ( 1) 00:29:40.110 24.223 - 24.326: 98.8344% ( 1) 00:29:40.110 24.428 - 24.531: 98.8441% ( 1) 00:29:40.110 24.634 - 24.736: 98.8538% ( 1) 00:29:40.110 25.352 - 25.455: 98.8635% ( 1) 00:29:40.110 25.455 - 25.558: 98.8732% ( 1) 00:29:40.110 25.558 - 25.660: 98.8830% ( 1) 00:29:40.110 25.660 - 25.763: 98.8927% ( 1) 00:29:40.110 25.763 - 25.865: 98.9315% ( 4) 00:29:40.110 25.865 - 25.968: 98.9509% ( 2) 00:29:40.110 25.968 - 26.071: 98.9704% ( 2) 00:29:40.110 26.071 - 26.173: 99.0481% ( 8) 00:29:40.110 26.173 - 26.276: 99.2035% ( 16) 00:29:40.110 26.276 - 26.481: 99.3492% ( 15) 00:29:40.110 26.481 - 26.687: 99.4463% ( 10) 00:29:40.110 26.687 - 26.892: 99.5046% ( 6) 00:29:40.110 26.892 - 27.097: 99.5338% ( 3) 00:29:40.110 27.302 - 27.508: 99.5920% ( 6) 00:29:40.110 27.508 - 27.713: 99.6017% ( 1) 00:29:40.110 28.124 - 28.329: 99.6212% ( 2) 00:29:40.110 36.745 - 36.951: 99.6309% ( 1) 00:29:40.110 36.951 - 37.156: 99.6406% ( 1) 00:29:40.110 37.156 - 37.361: 99.6503% ( 1) 00:29:40.110 37.361 - 37.566: 99.6600% ( 1) 00:29:40.110 37.566 - 37.772: 99.6697% ( 1) 00:29:40.110 37.977 - 38.182: 99.6795% ( 1) 00:29:40.110 39.003 - 39.209: 99.7086% ( 3) 00:29:40.110 39.825 - 40.030: 99.7183% ( 1) 00:29:40.110 40.030 - 40.235: 99.7377% ( 2) 00:29:40.110 40.235 - 40.440: 99.7475% ( 1) 00:29:40.110 40.440 - 40.646: 99.7669% ( 2) 00:29:40.110 40.646 - 40.851: 99.7766% ( 1) 00:29:40.110 41.262 - 41.467: 99.7863% ( 1) 00:29:40.110 41.672 - 41.877: 99.8057% ( 2) 00:29:40.110 41.877 - 42.083: 99.8154% ( 1) 00:29:40.110 42.083 - 42.288: 99.8252% ( 1) 00:29:40.110 42.288 - 42.493: 99.8349% ( 1) 00:29:40.110 42.493 - 42.698: 99.8446% ( 1) 00:29:40.110 42.698 - 42.904: 99.8640% ( 2) 00:29:40.110 42.904 - 43.109: 99.8834% ( 2) 00:29:40.110 43.109 - 43.314: 99.9223% ( 4) 00:29:40.110 43.314 - 43.520: 99.9320% ( 1) 00:29:40.110 44.135 - 44.341: 99.9417% ( 1) 00:29:40.110 45.162 - 45.367: 99.9514% ( 1) 00:29:40.110 45.778 - 45.983: 99.9611% ( 1) 00:29:40.110 60.353 - 60.763: 99.9709% ( 1) 00:29:40.110 60.763 - 61.174: 99.9806% ( 1) 00:29:40.110 110.031 - 110.852: 99.9903% ( 1) 00:29:40.110 181.469 - 182.290: 100.0000% ( 1) 00:29:40.110 00:29:40.110 Complete histogram 00:29:40.110 ================== 00:29:40.110 Range in us Cumulative Count 00:29:40.110 6.081 - 6.107: 0.0777% ( 8) 00:29:40.110 6.107 - 6.133: 0.6217% ( 56) 00:29:40.110 6.133 - 6.158: 2.4478% ( 188) 00:29:40.110 6.158 - 6.184: 5.8086% ( 346) 00:29:40.110 6.184 - 6.210: 9.5386% ( 384) 00:29:40.110 6.210 - 6.235: 13.5600% ( 414) 00:29:40.110 6.235 - 6.261: 17.6591% ( 422) 00:29:40.110 6.261 - 6.287: 21.3890% ( 384) 00:29:40.110 6.287 - 6.312: 25.0121% ( 373) 00:29:40.110 6.312 - 6.338: 28.3439% ( 343) 00:29:40.110 6.338 - 6.364: 31.1025% ( 284) 00:29:40.110 6.364 - 6.389: 33.4920% ( 246) 00:29:40.110 6.389 - 6.415: 35.8524% ( 243) 00:29:40.110 6.415 - 6.441: 38.5430% ( 277) 00:29:40.110 6.441 - 6.466: 40.9325% ( 246) 00:29:40.110 6.466 - 6.492: 43.2152% ( 235) 00:29:40.110 6.492 - 6.518: 45.4298% ( 228) 00:29:40.110 6.518 - 6.543: 48.0427% ( 269) 00:29:40.110 6.543 - 6.569: 50.3643% ( 239) 00:29:40.110 6.569 - 6.620: 54.4439% ( 420) 
00:29:40.110 [nvme_overhead histogram condensed: the cumulative latency distribution continued bucket-by-bucket from "6.620 - 6.672: 59.0578% ( 475)" up through "60.353 - 60.763: 100.0000% ( 1)"; the individual bucket lines are omitted here]
00:29:40.112 
00:29:40.112 
00:29:40.112 real 0m1.822s
00:29:40.112 user 0m1.016s
00:29:40.112 sys 0m0.805s
00:29:40.112 09:56:07 nvme.nvme_overhead -- common/autotest_common.sh@1124 -- # xtrace_disable
00:29:40.112 09:56:07 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:29:40.112 ************************************
00:29:40.112 END TEST nvme_overhead
00:29:40.112 ************************************
00:29:40.112 09:56:07 nvme -- common/autotest_common.sh@1142 -- # return 0
00:29:40.112 09:56:07 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:29:40.112 09:56:07 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:29:40.112 09:56:07 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:29:40.112 09:56:07 nvme -- common/autotest_common.sh@10 -- # set +x
00:29:40.112 ************************************
00:29:40.112 START TEST nvme_arbitration
00:29:40.112 ************************************
00:29:40.112 09:56:07 nvme.nvme_arbitration -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:29:40.679 EAL: TSC is not safe to use in SMP mode
00:29:40.679 EAL: TSC is not invariant 00:29:40.679 [2024-07-15 09:56:08.630647] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:44.861 Initializing NVMe Controllers 00:29:44.861 Attaching to 0000:00:10.0 00:29:44.861 Attached to 0000:00:10.0 00:29:44.861 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:29:44.861 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:29:44.861 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:29:44.861 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:29:44.861 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:29:44.861 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:29:44.861 Initialization complete. Launching workers. 00:29:44.861 Starting thread on core 1 with urgent priority queue 00:29:44.861 Starting thread on core 2 with urgent priority queue 00:29:44.861 Starting thread on core 3 with urgent priority queue 00:29:44.861 Starting thread on core 0 with urgent priority queue 00:29:44.861 QEMU NVMe Ctrl (12340 ) core 0: 5989.00 IO/s 16.70 secs/100000 ios 00:29:44.861 QEMU NVMe Ctrl (12340 ) core 1: 5955.33 IO/s 16.79 secs/100000 ios 00:29:44.861 QEMU NVMe Ctrl (12340 ) core 2: 5944.00 IO/s 16.82 secs/100000 ios 00:29:44.861 QEMU NVMe Ctrl (12340 ) core 3: 5948.00 IO/s 16.81 secs/100000 ios 00:29:44.861 ======================================================== 00:29:44.861 00:29:44.861 00:29:44.861 real 0m4.372s 00:29:44.861 user 0m12.612s 00:29:44.861 sys 0m0.790s 00:29:44.861 09:56:12 nvme.nvme_arbitration -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:44.861 09:56:12 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:29:44.861 ************************************ 00:29:44.861 END TEST nvme_arbitration 00:29:44.861 ************************************ 00:29:44.861 09:56:12 nvme -- common/autotest_common.sh@1142 -- # return 0 00:29:44.861 09:56:12 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:29:44.861 09:56:12 nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:29:44.861 09:56:12 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:44.861 09:56:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:29:44.861 ************************************ 00:29:44.861 START TEST nvme_single_aen 00:29:44.861 ************************************ 00:29:44.861 09:56:12 nvme.nvme_single_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:29:45.120 EAL: TSC is not safe to use in SMP mode 00:29:45.120 EAL: TSC is not invariant 00:29:45.120 [2024-07-15 09:56:13.065507] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:45.120 Asynchronous Event Request test 00:29:45.120 Attaching to 0000:00:10.0 00:29:45.120 Attached to 0000:00:10.0 00:29:45.120 Reset controller to setup AER completions for this process 00:29:45.120 Registering asynchronous event callbacks... 
00:29:45.120 Getting orig temperature thresholds of all controllers 00:29:45.120 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:29:45.120 Setting all controllers temperature threshold low to trigger AER 00:29:45.120 Waiting for all controllers temperature threshold to be set lower 00:29:45.120 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:29:45.120 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:29:45.120 Waiting for all controllers to trigger AER and reset threshold 00:29:45.120 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:29:45.120 Cleaning up... 00:29:45.120 00:29:45.120 real 0m0.815s 00:29:45.120 user 0m0.002s 00:29:45.120 sys 0m0.803s 00:29:45.120 09:56:13 nvme.nvme_single_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:45.120 09:56:13 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:29:45.121 ************************************ 00:29:45.121 END TEST nvme_single_aen 00:29:45.121 ************************************ 00:29:45.121 09:56:13 nvme -- common/autotest_common.sh@1142 -- # return 0 00:29:45.121 09:56:13 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:29:45.121 09:56:13 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:45.121 09:56:13 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:45.121 09:56:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:29:45.121 ************************************ 00:29:45.121 START TEST nvme_doorbell_aers 00:29:45.121 ************************************ 00:29:45.121 09:56:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1123 -- # nvme_doorbell_aers 00:29:45.121 09:56:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:29:45.121 09:56:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:29:45.121 09:56:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:29:45.121 09:56:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:29:45.121 09:56:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # bdfs=() 00:29:45.121 09:56:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # local bdfs 00:29:45.121 09:56:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:45.121 09:56:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:45.121 09:56:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:29:45.378 09:56:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:29:45.378 09:56:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:29:45.378 09:56:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:29:45.378 09:56:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:29:45.945 EAL: TSC is not safe to use in SMP mode 00:29:45.945 EAL: TSC is not invariant 00:29:45.945 [2024-07-15 09:56:14.022620] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:46.203 Executing: test_write_invalid_db 00:29:46.204 Waiting for AER completion... 00:29:46.204 Asynchronous Event received. 
00:29:46.204 Error Informaton Log Page received. 00:29:46.204 Success: test_write_invalid_db 00:29:46.204 00:29:46.204 Executing: test_invalid_db_write_overflow_sq 00:29:46.204 Waiting for AER completion... 00:29:46.204 Asynchronous Event received. 00:29:46.204 Error Informaton Log Page received. 00:29:46.204 Success: test_invalid_db_write_overflow_sq 00:29:46.204 00:29:46.204 Executing: test_invalid_db_write_overflow_cq 00:29:46.204 Waiting for AER completion... 00:29:46.204 Asynchronous Event received. 00:29:46.204 Error Informaton Log Page received. 00:29:46.204 Success: test_invalid_db_write_overflow_cq 00:29:46.204 00:29:46.204 00:29:46.204 real 0m0.889s 00:29:46.204 user 0m0.059s 00:29:46.204 sys 0m0.853s 00:29:46.204 ************************************ 00:29:46.204 END TEST nvme_doorbell_aers 00:29:46.204 ************************************ 00:29:46.204 09:56:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:46.204 09:56:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:29:46.204 09:56:14 nvme -- common/autotest_common.sh@1142 -- # return 0 00:29:46.204 09:56:14 nvme -- nvme/nvme.sh@97 -- # uname 00:29:46.204 09:56:14 nvme -- nvme/nvme.sh@97 -- # '[' FreeBSD '!=' FreeBSD ']' 00:29:46.204 09:56:14 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:29:46.204 09:56:14 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:46.204 09:56:14 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:46.204 09:56:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:29:46.204 ************************************ 00:29:46.204 START TEST bdev_nvme_reset_stuck_adm_cmd 00:29:46.204 ************************************ 00:29:46.204 09:56:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:29:46.462 * Looking for test storage... 
00:29:46.462 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:29:46.462 09:56:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:29:46.462 09:56:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:29:46.462 09:56:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:29:46.462 09:56:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:29:46.462 09:56:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:29:46.462 09:56:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:29:46.462 09:56:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # bdfs=() 00:29:46.462 09:56:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # local bdfs 00:29:46.462 09:56:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:29:46.462 09:56:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:29:46.462 09:56:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # bdfs=() 00:29:46.462 09:56:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # local bdfs 00:29:46.462 09:56:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:46.462 09:56:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:46.462 09:56:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:29:46.462 09:56:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:29:46.462 09:56:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:29:46.462 09:56:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:29:46.462 09:56:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:29:46.462 09:56:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:29:46.462 09:56:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=68702 00:29:46.462 09:56:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:46.462 09:56:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:29:46.462 09:56:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 68702 00:29:46.462 09:56:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@829 -- # '[' -z 68702 ']' 00:29:46.462 09:56:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:46.462 09:56:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:46.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:46.462 09:56:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:46.462 09:56:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:46.462 09:56:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:29:46.462 [2024-07-15 09:56:14.393228] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:29:46.462 [2024-07-15 09:56:14.393620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:29:47.030 EAL: TSC is not safe to use in SMP mode 00:29:47.030 EAL: TSC is not invariant 00:29:47.288 [2024-07-15 09:56:15.138219] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:47.288 [2024-07-15 09:56:15.252404] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:29:47.288 [2024-07-15 09:56:15.252464] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:29:47.288 [2024-07-15 09:56:15.252472] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:29:47.288 [2024-07-15 09:56:15.252479] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 00:29:47.288 [2024-07-15 09:56:15.256947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:47.288 [2024-07-15 09:56:15.257058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:47.288 [2024-07-15 09:56:15.261666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:47.288 [2024-07-15 09:56:15.261590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:47.546 09:56:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:47.546 09:56:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # return 0 00:29:47.546 09:56:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:29:47.546 09:56:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:47.546 09:56:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:29:47.546 [2024-07-15 09:56:15.543599] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:47.546 nvme0n1 00:29:47.546 09:56:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:47.546 09:56:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:29:47.546 09:56:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_XXXXX.txt 00:29:47.546 09:56:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:29:47.546 09:56:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:47.546 09:56:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:29:47.546 true 00:29:47.546 09:56:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:47.546 09:56:15 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:29:47.546 09:56:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1721037375 00:29:47.546 09:56:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=68714 00:29:47.546 09:56:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:47.546 09:56:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:29:47.546 09:56:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:29:50.111 [2024-07-15 09:56:17.736413] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:29:50.111 [2024-07-15 09:56:17.736620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:50.111 [2024-07-15 09:56:17.736636] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:29:50.111 [2024-07-15 09:56:17.736646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:50.111 [2024-07-15 09:56:17.738259] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
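The reset just logged recovers a deliberately stuck admin command. Stripped down to its JSON-RPC calls, the flow this test drives looks roughly like the sketch below; it assumes a running spdk_tgt, the 0000:00:10.0 controller from this run, and uses $CMD_B64 as a stand-in for the base64-encoded Get Features command shown above (the real script additionally records PIDs and start/stop timestamps around these calls):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Attach the controller under test as bdev controller "nvme0"
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    # Arm a one-shot injection: hold admin opcode 10 (0x0a, Get Features) for
    # up to 15 s (--do_not_submit) and complete it with sct=0 / sc=1
    $rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    # Send the admin command that will now get stuck; run it in the background
    $rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$CMD_B64" &
    sleep 2
    # Reset the controller; the stuck command has to be finished by hand,
    # which is the "Command completed manually" notice in the log above
    $rpc bdev_nvme_reset_controller nvme0
    wait
    $rpc bdev_nvme_detach_controller nvme0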
00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.111 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 68714 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 68714 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 68714 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_XXXXX.txt 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /tmp//sh-np.6AHqRU 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /tmp//sh-np.dOdf7M 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_XXXXX.txt 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 68702 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@948 -- # '[' -z 68702 ']' 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # kill -0 68702 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # uname 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps -c -o command 68702 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # tail -1 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:29:50.111 killing process with pid 68702 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68702' 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@967 -- # kill 68702 00:29:50.111 09:56:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # wait 68702 00:29:50.386 09:56:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:29:50.386 09:56:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:29:50.386 00:29:50.386 real 0m4.096s 00:29:50.386 user 0m12.393s 00:29:50.386 sys 0m1.155s 00:29:50.386 09:56:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:50.386 ************************************ 00:29:50.386 END TEST bdev_nvme_reset_stuck_adm_cmd 00:29:50.386 ************************************ 00:29:50.386 09:56:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:29:50.386 09:56:18 nvme -- common/autotest_common.sh@1142 -- # return 0 00:29:50.386 09:56:18 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:29:50.386 09:56:18 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:29:50.386 09:56:18 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:50.386 09:56:18 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:50.386 09:56:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:29:50.386 ************************************ 00:29:50.386 START TEST nvme_fio 00:29:50.386 ************************************ 00:29:50.386 09:56:18 nvme.nvme_fio -- common/autotest_common.sh@1123 -- # nvme_fio_test 00:29:50.386 09:56:18 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:29:50.386 09:56:18 
nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:29:50.386 09:56:18 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:29:50.386 09:56:18 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # bdfs=() 00:29:50.386 09:56:18 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # local bdfs 00:29:50.386 09:56:18 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:50.386 09:56:18 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:50.386 09:56:18 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:29:50.386 09:56:18 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:29:50.386 09:56:18 nvme.nvme_fio -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:29:50.386 09:56:18 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0') 00:29:50.386 09:56:18 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:29:50.386 09:56:18 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:29:50.386 09:56:18 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:29:50.386 09:56:18 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:29:51.327 EAL: TSC is not safe to use in SMP mode 00:29:51.327 EAL: TSC is not invariant 00:29:51.327 [2024-07-15 09:56:19.100939] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:51.327 09:56:19 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:29:51.327 09:56:19 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:29:51.895 EAL: TSC is not safe to use in SMP mode 00:29:51.895 EAL: TSC is not invariant 00:29:51.895 [2024-07-15 09:56:19.909344] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:51.895 09:56:19 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:29:51.895 09:56:19 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:29:51.895 09:56:19 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:29:51.895 09:56:19 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:51.895 09:56:19 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:51.895 09:56:19 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:51.895 09:56:19 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:29:51.896 09:56:19 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:29:51.896 09:56:19 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:51.896 09:56:19 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:51.896 09:56:19 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:29:51.896 09:56:19 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:29:51.896 09:56:19 nvme.nvme_fio -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:51.896 09:56:19 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:51.896 09:56:19 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:51.896 09:56:19 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:51.896 09:56:19 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:29:51.896 09:56:19 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:51.896 09:56:19 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:51.896 09:56:19 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:51.896 09:56:19 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:51.896 09:56:19 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:29:51.896 09:56:19 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:29:52.153 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:52.153 fio-3.35 00:29:52.153 Starting 1 thread 00:29:53.087 EAL: TSC is not safe to use in SMP mode 00:29:53.087 EAL: TSC is not invariant 00:29:53.087 [2024-07-15 09:56:20.843077] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:55.622 00:29:55.622 test: (groupid=0, jobs=1): err= 0: pid=101537: Mon Jul 15 09:56:23 2024 00:29:55.622 read: IOPS=43.5k, BW=170MiB/s (178MB/s)(340MiB/2001msec) 00:29:55.622 slat (nsec): min=369, max=34772, avg=637.25, stdev=584.42 00:29:55.622 clat (usec): min=261, max=6135, avg=1470.56, stdev=489.80 00:29:55.622 lat (usec): min=262, max=6137, avg=1471.20, stdev=489.96 00:29:55.622 clat percentiles (usec): 00:29:55.622 | 1.00th=[ 881], 5.00th=[ 1090], 10.00th=[ 1139], 20.00th=[ 1188], 00:29:55.622 | 30.00th=[ 1254], 40.00th=[ 1303], 50.00th=[ 1352], 60.00th=[ 1401], 00:29:55.622 | 70.00th=[ 1450], 80.00th=[ 1549], 90.00th=[ 1975], 95.00th=[ 2606], 00:29:55.622 | 99.00th=[ 3425], 99.50th=[ 3916], 99.90th=[ 4752], 99.95th=[ 5211], 00:29:55.622 | 99.99th=[ 5932] 00:29:55.622 bw ( KiB/s): min=161312, max=181256, per=98.59%, avg=171504.00, stdev=9979.28, samples=3 00:29:55.622 iops : min=40328, max=45314, avg=42876.00, stdev=2494.82, samples=3 00:29:55.622 write: IOPS=43.4k, BW=169MiB/s (178MB/s)(339MiB/2001msec); 0 zone resets 00:29:55.622 slat (nsec): min=419, max=24599, avg=996.69, stdev=538.83 00:29:55.622 clat (usec): min=259, max=6212, avg=1472.80, stdev=491.26 00:29:55.622 lat (usec): min=260, max=6216, avg=1473.80, stdev=491.42 00:29:55.622 clat percentiles (usec): 00:29:55.622 | 1.00th=[ 889], 5.00th=[ 1090], 10.00th=[ 1139], 20.00th=[ 1188], 00:29:55.622 | 30.00th=[ 1254], 40.00th=[ 1303], 50.00th=[ 1352], 60.00th=[ 1401], 00:29:55.622 | 70.00th=[ 1450], 80.00th=[ 1549], 90.00th=[ 1991], 95.00th=[ 2573], 00:29:55.622 | 99.00th=[ 3425], 99.50th=[ 3916], 99.90th=[ 4817], 99.95th=[ 5276], 00:29:55.622 | 99.99th=[ 6063] 00:29:55.622 bw ( KiB/s): min=160800, max=180208, per=98.33%, avg=170552.00, stdev=9704.36, samples=3 00:29:55.622 iops : min=40200, max=45052, avg=42638.00, stdev=2426.09, samples=3 00:29:55.622 lat (usec) : 500=0.07%, 750=0.52%, 1000=1.30% 00:29:55.622 lat (msec) : 2=88.27%, 4=9.42%, 10=0.42% 00:29:55.622 cpu : usr=100.00%, 
sys=0.00%, ctx=23, majf=0, minf=2 00:29:55.622 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:29:55.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:55.622 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:55.622 issued rwts: total=87026,86768,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:55.622 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:55.622 00:29:55.622 Run status group 0 (all jobs): 00:29:55.622 READ: bw=170MiB/s (178MB/s), 170MiB/s-170MiB/s (178MB/s-178MB/s), io=340MiB (356MB), run=2001-2001msec 00:29:55.622 WRITE: bw=169MiB/s (178MB/s), 169MiB/s-169MiB/s (178MB/s-178MB/s), io=339MiB (355MB), run=2001-2001msec 00:29:56.185 09:56:24 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:29:56.185 09:56:24 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:29:56.185 00:29:56.185 real 0m5.819s 00:29:56.185 user 0m2.409s 00:29:56.185 sys 0m3.340s 00:29:56.185 09:56:24 nvme.nvme_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:56.185 ************************************ 00:29:56.185 END TEST nvme_fio 00:29:56.185 ************************************ 00:29:56.185 09:56:24 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:29:56.185 09:56:24 nvme -- common/autotest_common.sh@1142 -- # return 0 00:29:56.185 00:29:56.185 real 0m28.712s 00:29:56.185 user 0m31.640s 00:29:56.185 sys 0m15.478s 00:29:56.185 09:56:24 nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:56.185 09:56:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:29:56.185 ************************************ 00:29:56.185 END TEST nvme 00:29:56.185 ************************************ 00:29:56.185 09:56:24 -- common/autotest_common.sh@1142 -- # return 0 00:29:56.185 09:56:24 -- spdk/autotest.sh@217 -- # [[ 0 -eq 1 ]] 00:29:56.185 09:56:24 -- spdk/autotest.sh@221 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:29:56.185 09:56:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:56.185 09:56:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:56.185 09:56:24 -- common/autotest_common.sh@10 -- # set +x 00:29:56.185 ************************************ 00:29:56.185 START TEST nvme_scc 00:29:56.185 ************************************ 00:29:56.185 09:56:24 nvme_scc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:29:56.442 * Looking for test storage... 
00:29:56.442 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:29:56.442 09:56:24 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:29:56.442 09:56:24 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:29:56.442 09:56:24 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:29:56.442 09:56:24 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:56.442 09:56:24 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:56.442 09:56:24 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:56.442 09:56:24 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:56.442 09:56:24 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:56.443 09:56:24 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:29:56.443 09:56:24 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:29:56.443 09:56:24 nvme_scc -- paths/export.sh@4 -- # export PATH 00:29:56.443 09:56:24 nvme_scc -- paths/export.sh@5 -- # echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:29:56.443 09:56:24 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:29:56.443 09:56:24 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:29:56.443 09:56:24 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:29:56.443 09:56:24 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:29:56.443 09:56:24 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:29:56.443 09:56:24 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:29:56.443 09:56:24 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:29:56.443 09:56:24 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:29:56.443 09:56:24 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:29:56.443 09:56:24 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:56.443 09:56:24 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:29:56.443 09:56:24 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ FreeBSD == Linux ]] 00:29:56.443 09:56:24 nvme_scc -- nvme/nvme_scc.sh@12 -- # exit 0 00:29:56.443 00:29:56.443 real 0m0.187s 00:29:56.443 user 0m0.165s 00:29:56.443 sys 0m0.095s 00:29:56.443 09:56:24 nvme_scc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:56.443 09:56:24 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:29:56.443 ************************************ 00:29:56.443 END TEST nvme_scc 00:29:56.443 ************************************ 00:29:56.443 09:56:24 -- common/autotest_common.sh@1142 -- # return 0 00:29:56.443 09:56:24 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:29:56.443 09:56:24 -- spdk/autotest.sh@226 -- # [[ 0 -eq 1 ]] 00:29:56.443 09:56:24 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:29:56.443 09:56:24 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]] 00:29:56.443 09:56:24 -- 
spdk/autotest.sh@236 -- # [[ '' -eq 1 ]] 00:29:56.443 09:56:24 -- spdk/autotest.sh@240 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:29:56.443 09:56:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:56.443 09:56:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:56.443 09:56:24 -- common/autotest_common.sh@10 -- # set +x 00:29:56.443 ************************************ 00:29:56.443 START TEST nvme_rpc 00:29:56.443 ************************************ 00:29:56.443 09:56:24 nvme_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:29:56.699 * Looking for test storage... 00:29:56.699 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:29:56.699 09:56:24 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:56.699 09:56:24 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:29:56.699 09:56:24 nvme_rpc -- common/autotest_common.sh@1524 -- # bdfs=() 00:29:56.699 09:56:24 nvme_rpc -- common/autotest_common.sh@1524 -- # local bdfs 00:29:56.699 09:56:24 nvme_rpc -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:29:56.699 09:56:24 nvme_rpc -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:29:56.699 09:56:24 nvme_rpc -- common/autotest_common.sh@1513 -- # bdfs=() 00:29:56.699 09:56:24 nvme_rpc -- common/autotest_common.sh@1513 -- # local bdfs 00:29:56.699 09:56:24 nvme_rpc -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:56.699 09:56:24 nvme_rpc -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:29:56.699 09:56:24 nvme_rpc -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:56.699 09:56:24 nvme_rpc -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:29:56.699 09:56:24 nvme_rpc -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:29:56.699 09:56:24 nvme_rpc -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:29:56.699 09:56:24 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:29:56.699 09:56:24 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=68956 00:29:56.699 09:56:24 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:29:56.699 09:56:24 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:29:56.699 09:56:24 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 68956 00:29:56.699 09:56:24 nvme_rpc -- common/autotest_common.sh@829 -- # '[' -z 68956 ']' 00:29:56.699 09:56:24 nvme_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:56.699 09:56:24 nvme_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:56.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:56.700 09:56:24 nvme_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:56.700 09:56:24 nvme_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:56.700 09:56:24 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:56.700 [2024-07-15 09:56:24.693569] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
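The get_first_nvme_bdf helper traced in this test boils down to a couple of lines of shell. A standalone sketch, assuming the repo layout and the single-controller inventory of this run:

    rootdir=/home/vagrant/spdk_repo/spdk
    # gen_nvme.sh emits a bdev_nvme attach config as JSON; jq pulls out each
    # controller's PCI address (traddr)
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    # Bail out if enumeration found nothing, otherwise report the first device
    (( ${#bdfs[@]} > 0 )) || { echo "No NVMe controllers found" >&2; exit 1; }
    echo "${bdfs[0]}"   # prints 0000:00:10.0 on this host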
00:29:56.700 [2024-07-15 09:56:24.693922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:29:57.634 EAL: TSC is not safe to use in SMP mode 00:29:57.634 EAL: TSC is not invariant 00:29:57.634 [2024-07-15 09:56:25.425191] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:57.634 [2024-07-15 09:56:25.539834] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:29:57.634 [2024-07-15 09:56:25.539896] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:29:57.634 [2024-07-15 09:56:25.543135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:57.634 [2024-07-15 09:56:25.543130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:57.634 09:56:25 nvme_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:57.634 09:56:25 nvme_rpc -- common/autotest_common.sh@862 -- # return 0 00:29:57.634 09:56:25 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:29:57.892 [2024-07-15 09:56:25.891647] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:57.893 Nvme0n1 00:29:57.893 09:56:25 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:29:57.893 09:56:25 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:29:58.151 request: 00:29:58.151 { 00:29:58.151 "bdev_name": "Nvme0n1", 00:29:58.151 "filename": "non_existing_file", 00:29:58.151 "method": "bdev_nvme_apply_firmware", 00:29:58.151 "req_id": 1 00:29:58.151 } 00:29:58.151 Got JSON-RPC error response 00:29:58.151 response: 00:29:58.151 { 00:29:58.151 "code": -32603, 00:29:58.151 "message": "open file failed." 
00:29:58.151 } 00:29:58.151 09:56:26 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:29:58.151 09:56:26 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:29:58.151 09:56:26 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:29:58.719 09:56:26 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:29:58.719 09:56:26 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 68956 00:29:58.719 09:56:26 nvme_rpc -- common/autotest_common.sh@948 -- # '[' -z 68956 ']' 00:29:58.719 09:56:26 nvme_rpc -- common/autotest_common.sh@952 -- # kill -0 68956 00:29:58.719 09:56:26 nvme_rpc -- common/autotest_common.sh@953 -- # uname 00:29:58.719 09:56:26 nvme_rpc -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:29:58.719 09:56:26 nvme_rpc -- common/autotest_common.sh@956 -- # ps -c -o command 68956 00:29:58.719 09:56:26 nvme_rpc -- common/autotest_common.sh@956 -- # tail -1 00:29:58.719 09:56:26 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:29:58.719 09:56:26 nvme_rpc -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:29:58.719 killing process with pid 68956 00:29:58.719 09:56:26 nvme_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68956' 00:29:58.719 09:56:26 nvme_rpc -- common/autotest_common.sh@967 -- # kill 68956 00:29:58.719 09:56:26 nvme_rpc -- common/autotest_common.sh@972 -- # wait 68956 00:29:58.977 00:29:58.977 real 0m2.515s 00:29:58.977 user 0m4.077s 00:29:58.977 sys 0m1.073s 00:29:58.977 09:56:26 nvme_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:58.977 09:56:26 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:58.977 ************************************ 00:29:58.977 END TEST nvme_rpc 00:29:58.977 ************************************ 00:29:58.977 09:56:27 -- common/autotest_common.sh@1142 -- # return 0 00:29:58.977 09:56:27 -- spdk/autotest.sh@241 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:29:58.977 09:56:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:58.977 09:56:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:58.977 09:56:27 -- common/autotest_common.sh@10 -- # set +x 00:29:58.977 ************************************ 00:29:58.977 START TEST nvme_rpc_timeouts 00:29:58.977 ************************************ 00:29:58.977 09:56:27 nvme_rpc_timeouts -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:29:59.236 * Looking for test storage... 
00:29:59.236 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:29:59.236 09:56:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:59.236 09:56:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_68997 00:29:59.236 09:56:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_68997 00:29:59.236 09:56:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:29:59.236 09:56:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=69025 00:29:59.236 09:56:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:29:59.236 09:56:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 69025 00:29:59.236 09:56:27 nvme_rpc_timeouts -- common/autotest_common.sh@829 -- # '[' -z 69025 ']' 00:29:59.236 09:56:27 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:59.236 09:56:27 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:59.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:59.236 09:56:27 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:59.236 09:56:27 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:59.236 09:56:27 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:29:59.236 [2024-07-15 09:56:27.222184] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:29:59.236 [2024-07-15 09:56:27.222403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:30:00.171 EAL: TSC is not safe to use in SMP mode 00:30:00.171 EAL: TSC is not invariant 00:30:00.171 [2024-07-15 09:56:27.983686] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:00.171 [2024-07-15 09:56:28.101015] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:30:00.171 [2024-07-15 09:56:28.101097] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:30:00.171 [2024-07-15 09:56:28.194172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:00.171 [2024-07-15 09:56:28.195016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:01.108 09:56:28 nvme_rpc_timeouts -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:01.109 09:56:28 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # return 0 00:30:01.109 Checking default timeout settings: 00:30:01.109 09:56:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:30:01.109 09:56:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:30:01.388 Making settings changes with rpc: 00:30:01.388 09:56:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:30:01.388 09:56:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:30:01.646 Check default vs. modified settings: 00:30:01.646 09:56:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:30:01.646 09:56:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:30:01.905 09:56:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:30:01.905 09:56:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:30:01.905 09:56:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_68997 00:30:01.905 09:56:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:30:01.905 09:56:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:30:01.905 09:56:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:30:01.905 09:56:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_68997 00:30:01.905 09:56:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:30:01.905 09:56:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:30:01.905 09:56:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:30:01.905 09:56:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:30:01.905 Setting action_on_timeout is changed as expected. 00:30:01.905 09:56:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:30:01.905 09:56:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:30:01.905 09:56:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_68997 00:30:01.905 09:56:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:30:01.905 09:56:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:30:01.905 09:56:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:30:01.905 09:56:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_68997 00:30:01.905 09:56:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:30:01.905 09:56:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:30:01.905 09:56:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:30:01.905 09:56:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:30:01.905 Setting timeout_us is changed as expected. 00:30:01.905 09:56:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:30:01.905 09:56:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:30:01.905 09:56:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_68997 00:30:01.905 09:56:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:30:01.905 09:56:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:30:01.905 09:56:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:30:01.905 09:56:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_68997 00:30:01.905 09:56:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:30:01.905 09:56:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:30:01.905 09:56:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:30:01.905 09:56:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:30:01.905 Setting timeout_admin_us is changed as expected. 00:30:01.905 09:56:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
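The three nearly identical grep/awk/sed passes above are a single loop in the script: for each setting, pull its value out of both saved configs, strip punctuation, and require that the value actually moved (none to abort, 0 to 12000000, 0 to 24000000). A sketch of that loop; the wording of the failure branch is an assumption, since the trace only exercises the success path:

    settings_to_check='action_on_timeout timeout_us timeout_admin_us'
    for setting in $settings_to_check; do
        # save_config emits JSON, so a line looks like:  "timeout_us": 12000000,
        # awk grabs the second field and sed strips the surrounding punctuation.
        setting_before=$(grep $setting $tmpfile_default_settings |
            awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        setting_modified=$(grep $setting $tmpfile_modified_settings |
            awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        if [ "$setting_before" == "$setting_modified" ]; then
            echo "Setting $setting was not changed" >&2
            exit 1
        fi
        echo "Setting $setting is changed as expected."
    done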
00:30:01.905 09:56:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:30:01.905 09:56:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_68997 /tmp/settings_modified_68997 00:30:01.905 09:56:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 69025 00:30:01.905 09:56:29 nvme_rpc_timeouts -- common/autotest_common.sh@948 -- # '[' -z 69025 ']' 00:30:01.905 09:56:29 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # kill -0 69025 00:30:01.905 09:56:29 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # uname 00:30:01.905 09:56:29 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:30:01.905 09:56:29 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # tail -1 00:30:01.906 09:56:29 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps -c -o command 69025 00:30:01.906 09:56:29 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:30:01.906 09:56:29 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:30:01.906 killing process with pid 69025 00:30:01.906 09:56:29 nvme_rpc_timeouts -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69025' 00:30:01.906 09:56:29 nvme_rpc_timeouts -- common/autotest_common.sh@967 -- # kill 69025 00:30:01.906 09:56:29 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # wait 69025 00:30:02.473 RPC TIMEOUT SETTING TEST PASSED. 00:30:02.473 09:56:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:30:02.473 00:30:02.473 real 0m3.295s 00:30:02.473 user 0m5.548s 00:30:02.473 sys 0m1.191s 00:30:02.473 09:56:30 nvme_rpc_timeouts -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:02.473 09:56:30 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:30:02.473 ************************************ 00:30:02.473 END TEST nvme_rpc_timeouts 00:30:02.473 ************************************ 00:30:02.473 09:56:30 -- common/autotest_common.sh@1142 -- # return 0 00:30:02.473 09:56:30 -- spdk/autotest.sh@243 -- # uname -s 00:30:02.473 09:56:30 -- spdk/autotest.sh@243 -- # '[' FreeBSD = Linux ']' 00:30:02.473 09:56:30 -- spdk/autotest.sh@247 -- # [[ 0 -eq 1 ]] 00:30:02.473 09:56:30 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:30:02.473 09:56:30 -- spdk/autotest.sh@260 -- # timing_exit lib 00:30:02.473 09:56:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:02.473 09:56:30 -- common/autotest_common.sh@10 -- # set +x 00:30:02.473 09:56:30 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:30:02.473 09:56:30 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:30:02.473 09:56:30 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:30:02.473 09:56:30 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:30:02.473 09:56:30 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:30:02.473 09:56:30 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:30:02.473 09:56:30 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:30:02.473 09:56:30 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:30:02.473 09:56:30 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:30:02.473 09:56:30 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:30:02.473 09:56:30 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:30:02.473 09:56:30 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:30:02.473 09:56:30 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:30:02.473 09:56:30 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:30:02.473 09:56:30 -- spdk/autotest.sh@363 -- # [[ 0 -eq 
1 ]] 00:30:02.473 09:56:30 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:30:02.473 09:56:30 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:30:02.473 09:56:30 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:30:02.473 09:56:30 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:30:02.473 09:56:30 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:30:02.473 09:56:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:02.473 09:56:30 -- common/autotest_common.sh@10 -- # set +x 00:30:02.473 09:56:30 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:30:02.473 09:56:30 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:30:02.473 09:56:30 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:30:02.473 09:56:30 -- common/autotest_common.sh@10 -- # set +x 00:30:03.039 setup.sh cleanup function not yet supported on FreeBSD 00:30:03.039 09:56:31 -- common/autotest_common.sh@1451 -- # return 0 00:30:03.039 09:56:31 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:30:03.039 09:56:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:03.039 09:56:31 -- common/autotest_common.sh@10 -- # set +x 00:30:03.299 09:56:31 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:30:03.299 09:56:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:03.299 09:56:31 -- common/autotest_common.sh@10 -- # set +x 00:30:03.299 09:56:31 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:30:03.299 09:56:31 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:30:03.299 09:56:31 -- spdk/autotest.sh@391 -- # hash lcov 00:30:03.299 /home/vagrant/spdk_repo/spdk/autotest.sh: line 391: hash: lcov: not found 00:30:03.299 09:56:31 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:03.299 09:56:31 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:03.299 09:56:31 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:03.299 09:56:31 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:03.299 09:56:31 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:30:03.299 09:56:31 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:30:03.299 09:56:31 -- paths/export.sh@4 -- $ export PATH 00:30:03.299 09:56:31 -- paths/export.sh@5 -- $ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:30:03.299 09:56:31 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:30:03.299 09:56:31 -- common/autobuild_common.sh@444 -- $ date +%s 00:30:03.299 09:56:31 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721037391.XXXXXX 00:30:03.299 09:56:31 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721037391.XXXXXX.hIhYFUFNSS 00:30:03.299 09:56:31 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:30:03.299 09:56:31 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:30:03.299 09:56:31 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:30:03.299 09:56:31 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' 
--exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:30:03.299 09:56:31 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:30:03.299 09:56:31 -- common/autobuild_common.sh@460 -- $ get_config_params 00:30:03.299 09:56:31 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:30:03.299 09:56:31 -- common/autotest_common.sh@10 -- $ set +x 00:30:03.558 09:56:31 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio' 00:30:03.558 09:56:31 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:30:03.558 09:56:31 -- pm/common@17 -- $ local monitor 00:30:03.558 09:56:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:03.558 09:56:31 -- pm/common@25 -- $ sleep 1 00:30:03.558 09:56:31 -- pm/common@21 -- $ date +%s 00:30:03.558 09:56:31 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721037391 00:30:03.558 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721037391_collect-vmstat.pm.log 00:30:04.496 09:56:32 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:30:04.496 09:56:32 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:30:04.496 09:56:32 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:30:04.496 09:56:32 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:30:04.496 09:56:32 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:30:04.496 09:56:32 -- spdk/autopackage.sh@19 -- $ timing_finish 00:30:04.496 09:56:32 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:04.496 09:56:32 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:30:04.496 09:56:32 -- spdk/autopackage.sh@20 -- $ exit 0 00:30:04.496 09:56:32 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:30:04.496 09:56:32 -- pm/common@29 -- $ signal_monitor_resources TERM 00:30:04.496 09:56:32 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:30:04.496 09:56:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:04.496 09:56:32 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:30:04.496 09:56:32 -- pm/common@44 -- $ pid=69250 00:30:04.496 09:56:32 -- pm/common@50 -- $ kill -TERM 69250 00:30:04.496 + [[ -n 1225 ]] 00:30:04.496 + sudo kill 1225 00:30:04.506 [Pipeline] } 00:30:04.529 [Pipeline] // timeout 00:30:04.536 [Pipeline] } 00:30:04.556 [Pipeline] // stage 00:30:04.562 [Pipeline] } 00:30:04.579 [Pipeline] // catchError 00:30:04.587 [Pipeline] stage 00:30:04.590 [Pipeline] { (Stop VM) 00:30:04.604 [Pipeline] sh 00:30:04.881 + vagrant halt 00:30:08.163 ==> default: Halting domain... 00:30:30.105 [Pipeline] sh 00:30:30.387 + vagrant destroy -f 00:30:33.671 ==> default: Removing domain... 
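With the guest-side work finished (the test passed, the vmstat monitor was stopped via its pidfile with kill -TERM, and the leftover process was killed with sudo), the pipeline tears the VM down from the host: a graceful vagrant halt first, then a forced destroy. Roughly, assuming the job's Vagrantfile directory (the path here is hypothetical):

    cd "$WORKSPACE/freebsd-vg-autotest" || exit 1   # vagrant must run where the Vagrantfile lives
    vagrant halt          # graceful shutdown: "Halting domain..."
    vagrant destroy -f    # remove the libvirt domain without prompting: "Removing domain..."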
00:30:33.683 [Pipeline] sh 00:30:33.967 + mv output /var/jenkins/workspace/freebsd-vg-autotest/output 00:30:33.976 [Pipeline] } 00:30:33.994 [Pipeline] // stage 00:30:34.002 [Pipeline] } 00:30:34.018 [Pipeline] // dir 00:30:34.025 [Pipeline] } 00:30:34.036 [Pipeline] // wrap 00:30:34.043 [Pipeline] } 00:30:34.054 [Pipeline] // catchError 00:30:34.065 [Pipeline] stage 00:30:34.067 [Pipeline] { (Epilogue) 00:30:34.082 [Pipeline] sh 00:30:34.364 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:34.378 [Pipeline] catchError 00:30:34.380 [Pipeline] { 00:30:34.395 [Pipeline] sh 00:30:34.674 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:34.674 Artifacts sizes are good 00:30:34.683 [Pipeline] } 00:30:34.701 [Pipeline] // catchError 00:30:34.713 [Pipeline] archiveArtifacts 00:30:34.721 Archiving artifacts 00:30:34.782 [Pipeline] cleanWs 00:30:34.794 [WS-CLEANUP] Deleting project workspace... 00:30:34.794 [WS-CLEANUP] Deferred wipeout is used... 00:30:34.802 [WS-CLEANUP] done 00:30:34.804 [Pipeline] } 00:30:34.824 [Pipeline] // stage 00:30:34.831 [Pipeline] } 00:30:34.850 [Pipeline] // node 00:30:34.857 [Pipeline] End of Pipeline 00:30:34.893 Finished: SUCCESS
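For completeness, the size gate that printed "Artifacts sizes are good" could be as simple as the following; the script's actual logic and budget are not shown in the log, so everything here is a hypothetical reconstruction:

    # Hypothetical artifact size check: fail the build if the collected
    # output directory exceeds a fixed budget.
    max_kb=$((1024 * 1024))                         # 1 GiB budget (assumption)
    used_kb=$(du -sk output | awk '{print $1}')
    if [ "$used_kb" -gt "$max_kb" ]; then
        echo "Artifacts are too large: ${used_kb} KB" >&2
        exit 1
    fi
    echo "Artifacts sizes are good"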